Bug #47009 (closed)

TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt'

Added by Sebastian Wagner over 3 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: Varsha Rao
Category: Testing
Target version: v16.0.0
% Done: 0%
Source: Q/A
Tags:
Backport: octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): Ganesha FSAL, mgr/nfs, qa-suite
Labels (FS): NFS-cluster
Pull request ID: 36740
Crash signature (v1):
Crash signature (v2):

Description

sed -n '/2020-08-18T12:13:28.371/,/2020-08-18T12:17:23.469/p' *
2020-08-18T12:13:28.371 INFO:tasks.cephfs_test_runner:Starting test: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2020-08-18T12:13:28.372 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:27.346480+0000 mon.a (mon.0) 397 : audit [INF] from='client.? 172.21.15.36:0/3067776373' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]': finished
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:27.695797+0000 mgr.a (mgr.14247) 35 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "nfs cluster config reset", "clusterid": "invalidtest", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: cluster 2020-08-18T12:13:28.012636+0000 client.admin (client.?) 0 : cluster [INF] Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid
2020-08-18T12:13:28.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:28.012780+0000 mon.a (mon.0) 398 : audit [INF] from='client.? 172.21.15.36:0/164629844' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]: dispatch
2020-08-18T12:13:29.377 INFO:teuthology.orchestra.run.smithi036:> sudo systemctl status nfs-server
2020-08-18T12:13:29.399 INFO:teuthology.orchestra.run.smithi036.stdout:* nfs-server.service - NFS server and services
2020-08-18T12:13:29.400 INFO:teuthology.orchestra.run.smithi036.stdout:   Loaded: loaded (/lib/systemd/system/nfs-server.service; disabled; vendor preset: enabled)
2020-08-18T12:13:29.400 INFO:teuthology.orchestra.run.smithi036.stdout:   Active: inactive (dead)
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:04:04 smithi036 systemd[1]: Starting NFS server and services...
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:04:04 smithi036 systemd[1]: Started NFS server and services.
2020-08-18T12:13:29.402 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:13:04 smithi036 systemd[1]: Stopping NFS server and services...
2020-08-18T12:13:29.402 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:13:04 smithi036 systemd[1]: Stopped NFS server and services.
2020-08-18T12:13:29.404 DEBUG:teuthology.orchestra.run:got remote process result: 3
2020-08-18T12:13:29.404 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early nfs cluster create cephfs test
2020-08-18T12:13:29.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: audit 2020-08-18T12:13:28.348582+0000 mon.a (mon.0) 399 : audit [INF] from='client.? 172.21.15.36:0/164629844' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]': finished
2020-08-18T12:13:29.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: cluster 2020-08-18T12:13:28.692094+0000 client.admin (client.?) 0 : cluster [INF] Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config
2020-08-18T12:13:29.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: audit 2020-08-18T12:13:28.692238+0000 mon.a (mon.0) 400 : audit [INF] from='client.? 172.21.15.36:0/1973330598' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config"]}]: dispatch
2020-08-18T12:13:29.791 INFO:teuthology.orchestra.run.smithi036.stdout:NFS Cluster Created Successfully
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.354619+0000 mon.a (mon.0) 401 : audit [INF] from='client.? 172.21.15.36:0/1973330598' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config"]}]': finished
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.776446+0000 mgr.a (mgr.14247) 37 : audit [DBG] from='client.14294 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "type": "cephfs", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.785356+0000 mgr.a (mgr.14247) 38 : cephadm [INF] Saving service nfs.ganesha-test spec with placement count:1
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.786010+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.785388\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]: dispatch
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.789858+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.785388\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]': finished
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.792400+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.795539+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.797870+0000 mgr.a (mgr.14247) 39 : cephadm [INF] Checking pool "nfs-ganesha" exists for service nfs.ganesha-test
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.798100+0000 mgr.a (mgr.14247) 40 : cephadm [INF] Saving service nfs.ganesha-test spec with placement count:1
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.798617+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.798122\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]: dispatch
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.802947+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.798122\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]': finished
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.803757+0000 mgr.a (mgr.14247) 41 : cephadm [INF] Create daemon ganesha-test.smithi036 on host smithi036 with spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'ganesha-test', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'test'})
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.804059+0000 mgr.a (mgr.14247) 42 : cephadm [INF] Checking pool "nfs-ganesha" exists for service nfs.ganesha-test
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.804216+0000 mgr.a (mgr.14247) 43 : cephadm [INF] Create keyring: client.nfs.ganesha-test.smithi036
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.804550+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-test.smithi036"}]: dispatch
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.805602+0000 mgr.a (mgr.14247) 44 : cephadm [INF] Updating keyring caps: client.nfs.ganesha-test.smithi036
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.805983+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "auth caps", "entity": "client.nfs.ganesha-test.smithi036", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test"]}]: dispatch
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.810709+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "auth caps", "entity": "client.nfs.ganesha-test.smithi036", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test"]}]': finished
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.813262+0000 mgr.a (mgr.14247) 45 : cephadm [INF] Rados config object exists: conf-nfs.ganesha-test
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.813995+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.814959+0000 mgr.a (mgr.14247) 46 : cephadm [INF] Deploying daemon nfs.ganesha-test.smithi036 on smithi036
2020-08-18T12:13:30.641 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.815454+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:13:32.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:32 smithi036 bash[14996]: audit 2020-08-18T12:13:31.658178+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-08-18T12:13:34.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.945089+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:34.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.949127+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:34.647 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.953850+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:34.647 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.956396+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:37.808 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch ls nfs
2020-08-18T12:13:38.115 INFO:teuthology.orchestra.run.smithi036.stdout:NAME              RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                                                          IMAGE ID
2020-08-18T12:13:38.115 INFO:teuthology.orchestra.run.smithi036.stdout:nfs.ganesha-test      1/1  4s ago     8s   count:1    quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7  27a13cd81179
2020-08-18T12:13:38.395 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:38 smithi036 bash[14996]: audit 2020-08-18T12:13:38.112053+0000 mgr.a (mgr.14247) 51 : audit [DBG] from='client.14303 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "nfs", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:08.131 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs volume create user_test_fs
2020-08-18T12:14:09.649 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:09 smithi036 bash[14996]: audit 2020-08-18T12:14:08.447802+0000 mgr.a (mgr.14247) 67 : audit [DBG] from='client.14307 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "user_test_fs", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:09.650 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:09 smithi036 bash[14996]: audit 2020-08-18T12:14:08.448662+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.meta"}]: dispatch
2020-08-18T12:14:10.406 INFO:teuthology.orchestra.run.smithi036.stderr:new fs with metadata pool 3 and data pool 4
2020-08-18T12:14:10.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: debug 2020-08-18T12:14:10.371+0000 7f3f88120700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2020-08-18T12:14:10.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: audit 2020-08-18T12:14:09.369221+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.meta"}]': finished
2020-08-18T12:14:10.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: cluster 2020-08-18T12:14:09.369383+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e27: 3 total, 3 up, 3 in
2020-08-18T12:14:10.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: audit 2020-08-18T12:14:09.372098+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.data"}]: dispatch
2020-08-18T12:14:11.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.371644+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.data"}]': finished
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.372094+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e28: 3 total, 3 up, 3 in
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.374540+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "fs new", "fs_name": "user_test_fs", "metadata": "cephfs.user_test_fs.meta", "data": "cephfs.user_test_fs.data"}]: dispatch
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.375970+0000 mon.a (mon.0) 429 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.376015+0000 mon.a (mon.0) 430 : cluster [WRN] Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.383345+0000 mon.a (mon.0) 431 : cluster [DBG] osdmap e29: 3 total, 3 up, 3 in
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.383558+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "fs new", "fs_name": "user_test_fs", "metadata": "cephfs.user_test_fs.meta", "data": "cephfs.user_test_fs.data"}]': finished
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.383625+0000 mon.a (mon.0) 433 : cluster [DBG] fsmap user_test_fs:0
2020-08-18T12:14:12.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:12 smithi036 bash[14996]: cluster 2020-08-18T12:14:11.397053+0000 mon.a (mon.0) 434 : cluster [DBG] osdmap e30: 3 total, 3 up, 3 in
2020-08-18T12:14:13.894 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:13 smithi036 bash[14996]: cluster 2020-08-18T12:14:12.402837+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e31: 3 total, 3 up, 3 in
2020-08-18T12:14:30.417 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early auth get-or-create-key client.test mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs' mds 'allow rw path=/'
2020-08-18T12:14:30.757 INFO:teuthology.orchestra.run.smithi036.stdout:AQAmxjtfQePTLBAA90sKBZ/mAeDvfQUPUVQ7AA==
2020-08-18T12:14:30.776 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early nfs cluster info test
2020-08-18T12:14:31.107 INFO:teuthology.orchestra.run.smithi036.stdout:{
2020-08-18T12:14:31.107 INFO:teuthology.orchestra.run.smithi036.stdout:    "test": [
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout:        {
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout:            "hostname": "smithi036",
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout:            "ip": [
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout:                "172.21.15.36" 
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout:            ],
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout:            "port": 2049
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout:        }
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout:    ]
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout:}
2020-08-18T12:14:31.116 INFO:teuthology.orchestra.run.smithi036:> sudo ceph nfs cluster config set test -i -
2020-08-18T12:14:31.393 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:30.751880+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 172.21.15.36:0/275914116' entity='client.admin' cmd=[{"prefix": "auth get-or-create-key", "entity": "client.test", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs", "mds", "allow rw path=/"]}]: dispatch
2020-08-18T12:14:31.394 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:30.756918+0000 mon.a (mon.0) 437 : audit [INF] from='client.? 172.21.15.36:0/275914116' entity='client.admin' cmd='[{"prefix": "auth get-or-create-key", "entity": "client.test", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs", "mds", "allow rw path=/"]}]': finished
2020-08-18T12:14:31.394 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:31.098832+0000 mgr.a (mgr.14247) 79 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "nfs cluster info", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.438397+0000 mgr.a (mgr.14247) 81 : audit [DBG] from='client.14315 -' entity='client.admin' cmd=[{"prefix": "nfs cluster config set", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: cephadm 2020-08-18T12:14:31.477406+0000 mgr.a (mgr.14247) 82 : cephadm [INF] Restart service nfs.ganesha-test
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.478191+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:14:32.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.745368+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:14:47.360 INFO:teuthology.orchestra.run.smithi036.stdout:NFS-Ganesha Config Set Successfully
2020-08-18T12:14:47.607 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:47 smithi036 bash[14996]: audit 2020-08-18T12:14:47.361760+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-08-18T12:14:49.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.826093+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi036","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"d36c25661867\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823102\", \"created\": \"2020-08-18T12:10:16.467600\", \"started\": \"2020-08-18T12:10:22.842682\"}, \"mgr.a\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"c590cd729900\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823283\", \"created\": \"2020-08-18T12:10:24.515600\", \"started\": \"2020-08-18T12:12:53.589820\"}, \"osd.0\": {\"daemon_type\": \"osd\", \"daemon_id\": \"0\", \"hostname\": \"smithi036\", \"container_id\": \"185d14a6bc0e\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823393\", \"created\": \"2020-08-18T12:11:29.443599\", \"started\": \"2020-08-18T12:11:32.358036\"}, \"osd.1\": {\"daemon_type\": \"osd\", \"daemon_id\": \"1\", \"hostname\": \"smithi036\", \"container_id\": \"18c28aa439b9\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823644\", \"created\": \"2020-08-18T12:11:47.099598\", \"started\": \"2020-08-18T12:11:49.606862\"}, \"osd.2\": {\"daemon_type\": \"osd\", \"daemon_id\": \"2\", \"hostname\": \"smithi036\", \"container_id\": \"c5ce7db6d02f\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823856\", \"created\": \"2020-08-18T12:12:04.511598\", \"started\": \"2020-08-18T12:12:07.007047\"}, \"mds.1.smithi036.fwyvuo\": {\"daemon_type\": \"mds\", \"daemon_id\": \"1.smithi036.fwyvuo\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": -1, \"status_desc\": \"error\", \"last_refresh\": \"2020-08-18T12:14:48.824061\", \"created\": \"2020-08-18T12:12:48.687597\"}, \"nfs.ganesha-test.smithi036\": {\"daemon_type\": \"nfs\", 
\"daemon_id\": \"ganesha-test.smithi036\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.824135\", \"created\": \"2020-08-18T12:13:31.627596\"}}, \"devices\": [{\"rejected_reasons\": [\"Insufficient space (<5GB) on vgs\", \"locked\", \"LVM detected\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"osd_id\": \"2\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"b672ef8c-b1b7-4095-9010-9e0dbb7fd560\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tmDUnC-MITj-o9dZ-VrFd-fVuq-YQu7-wAKj21\"}, {\"name\": \"lv_3\", \"osd_id\": \"1\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"00c9da60-5629-446b-81c4-2aa37c742fd3\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tWjwp6-rw22-xfX3-WTsV-NQfm-c2cR-RwkcQ2\"}, {\"name\": \"lv_4\", \"osd_id\": \"0\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"22b971a7-f36a-4ac8-bc5a-bc03c27f6ebd\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"yDckaE-Tm2y-CZ1W-MNvA-Z9at-Sx8F-Wy1sdo\"}, {\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT53300040400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN04\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"cfq\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W4J2QS\"}], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:09.324619\"}, \"mgr.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:10.820348\"}, \"osd.0\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:27.858274\"}, \"osd.1\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:45.513076\"}, \"osd.2\": {\"deps\": [], \"last_config\": \"2020-08-18T12:12:02.780273\"}, \"mds.1.smithi036.fwyvuo\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:01.350567\"}, \"nfs.ganesha-test.smithi036\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:29.803823\"}}, \"last_daemon_update\": \"2020-08-18T12:14:48.824320\", \"last_device_update\": \"2020-08-18T12:12:11.983889\", 
\"networks\": {\"172.21.0.0/20\": [\"172.21.15.36\"], \"172.21.15.254\": [\"172.21.15.36\"], \"fe80::/64\": [\"fe80::ae1f:6bff:fef8:2542\"]}, \"last_host_check\": \"2020-08-18T12:10:50.314546\"}"}]: dispatch
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.830355+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi036","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"d36c25661867\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823102\", \"created\": \"2020-08-18T12:10:16.467600\", \"started\": \"2020-08-18T12:10:22.842682\"}, \"mgr.a\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"c590cd729900\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823283\", \"created\": \"2020-08-18T12:10:24.515600\", \"started\": \"2020-08-18T12:12:53.589820\"}, \"osd.0\": {\"daemon_type\": \"osd\", \"daemon_id\": \"0\", \"hostname\": \"smithi036\", \"container_id\": \"185d14a6bc0e\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823393\", \"created\": \"2020-08-18T12:11:29.443599\", \"started\": \"2020-08-18T12:11:32.358036\"}, \"osd.1\": {\"daemon_type\": \"osd\", \"daemon_id\": \"1\", \"hostname\": \"smithi036\", \"container_id\": \"18c28aa439b9\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823644\", \"created\": \"2020-08-18T12:11:47.099598\", \"started\": \"2020-08-18T12:11:49.606862\"}, \"osd.2\": {\"daemon_type\": \"osd\", \"daemon_id\": \"2\", \"hostname\": \"smithi036\", \"container_id\": \"c5ce7db6d02f\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823856\", \"created\": \"2020-08-18T12:12:04.511598\", \"started\": \"2020-08-18T12:12:07.007047\"}, \"mds.1.smithi036.fwyvuo\": {\"daemon_type\": \"mds\", \"daemon_id\": \"1.smithi036.fwyvuo\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": -1, \"status_desc\": \"error\", \"last_refresh\": \"2020-08-18T12:14:48.824061\", \"created\": \"2020-08-18T12:12:48.687597\"}, \"nfs.ganesha-test.smithi036\": {\"daemon_type\": \"nfs\", 
\"daemon_id\": \"ganesha-test.smithi036\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.824135\", \"created\": \"2020-08-18T12:13:31.627596\"}}, \"devices\": [{\"rejected_reasons\": [\"Insufficient space (<5GB) on vgs\", \"locked\", \"LVM detected\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"osd_id\": \"2\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"b672ef8c-b1b7-4095-9010-9e0dbb7fd560\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tmDUnC-MITj-o9dZ-VrFd-fVuq-YQu7-wAKj21\"}, {\"name\": \"lv_3\", \"osd_id\": \"1\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"00c9da60-5629-446b-81c4-2aa37c742fd3\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tWjwp6-rw22-xfX3-WTsV-NQfm-c2cR-RwkcQ2\"}, {\"name\": \"lv_4\", \"osd_id\": \"0\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"22b971a7-f36a-4ac8-bc5a-bc03c27f6ebd\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"yDckaE-Tm2y-CZ1W-MNvA-Z9at-Sx8F-Wy1sdo\"}, {\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT53300040400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN04\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"cfq\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W4J2QS\"}], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:09.324619\"}, \"mgr.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:10.820348\"}, \"osd.0\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:27.858274\"}, \"osd.1\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:45.513076\"}, \"osd.2\": {\"deps\": [], \"last_config\": \"2020-08-18T12:12:02.780273\"}, \"mds.1.smithi036.fwyvuo\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:01.350567\"}, \"nfs.ganesha-test.smithi036\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:29.803823\"}}, \"last_daemon_update\": \"2020-08-18T12:14:48.824320\", \"last_device_update\": \"2020-08-18T12:12:11.983889\", 
\"networks\": {\"172.21.0.0/20\": [\"172.21.15.36\"], \"172.21.15.254\": [\"172.21.15.36\"], \"fe80::/64\": [\"fe80::ae1f:6bff:fef8:2542\"]}, \"last_host_check\": \"2020-08-18T12:10:50.314546\"}"}]': finished
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.831848+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.834997+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:15:17.380 INFO:teuthology.orchestra.run.smithi036:> sudo rados -p nfs-ganesha -N test ls
2020-08-18T12:15:17.451 INFO:teuthology.orchestra.run.smithi036.stdout:rec-0000000000000003:nfs.ganesha-test.smithi036
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:userconf-nfs.ganesha-test
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:grace
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:conf-nfs.ganesha-test
2020-08-18T12:15:17.456 INFO:tasks.cephfs.test_nfs:b'rec-0000000000000003:nfs.ganesha-test.smithi036\nuserconf-nfs.ganesha-test\ngrace\nconf-nfs.ganesha-test\n'
2020-08-18T12:15:17.456 INFO:teuthology.orchestra.run.smithi036:> sudo rados -p nfs-ganesha -N test get userconf-nfs.ganesha-test -
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: LOG {
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout:        Default_log_level = FULL_DEBUG;
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout:        }
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout:
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout:        EXPORT {
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout:         Export_Id = 100;
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout:         Transports = TCP;
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout:         Path = /;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout:         Pseudo = /ceph/;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout:         Protocols = 4;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout:         Access_Type = RW;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout:         Attr_Expiration_Time = 0;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout:         Squash = None;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout:         FSAL {
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout:               Name = CEPH;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout:                      Filesystem = user_test_fs;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout:                      User_Id = test;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout:                      Secret_Access_Key = 'AQAmxjtfQePTLBAA90sKBZ/mAeDvfQUPUVQ7AA==';
2020-08-18T12:15:17.508 INFO:teuthology.orchestra.run.smithi036.stdout:         }
2020-08-18T12:15:17.509 INFO:teuthology.orchestra.run.smithi036.stdout:        }
2020-08-18T12:15:17.509 INFO:teuthology.orchestra.run.smithi036:> sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt
2020-08-18T12:17:22.721 INFO:teuthology.orchestra.run.smithi036.stderr:mount.nfs: Connection timed out
2020-08-18T12:17:22.724 DEBUG:teuthology.orchestra.run:got remote process result: 32
2020-08-18T12:17:22.726 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2020-08-18T12:17:23.460 INFO:tasks.cephfs_test_runner:test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) ... ERROR
2020-08-18T12:17:23.461 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:ERROR: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing-2020-08-18-1033/qa/tasks/cephfs/test_nfs.py", line 449, in test_cluster_set_reset_user_config
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner:    self.ctx.cluster.run(args=mnt_cmd)
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/cluster.py", line 64, in run
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner:    return [remote.run(**kwargs) for remote in remotes]
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/cluster.py", line 64, in <listcomp>
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner:    return [remote.run(**kwargs) for remote in remotes]
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner:    r.wait()
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi036 with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt'
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:Ran 3 tests in 274.876s
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.469 INFO:tasks.cephfs_test_runner:FAILED (errors=1)

https://pulpito.ceph.com/swagner-2020-08-18_11:48:28-rados:cephadm-wip-swagner-testing-2020-08-18-1033-distro-basic-smithi/5356227/
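
Condensed from the log above, the failing sequence is essentially the following (a sketch; the IP, filesystem, and cluster names are specific to this run):

    # create the NFS cluster and a filesystem to export
    ceph nfs cluster create cephfs test
    ceph fs volume create user_test_fs
    # create the user referenced by the custom config
    ceph auth get-or-create-key client.test mon 'allow r' \
        osd 'allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs' \
        mds 'allow rw path=/'
    # push the user-defined EXPORT block (shown above) via stdin
    ceph nfs cluster config set test -i -
    # this is the step that times out and fails with status 32
    sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt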

#1 Updated by Sebastian Wagner over 3 years ago

  • Project changed from Orchestrator to CephFS
  • Category deleted (cephadm/nfs)
  • Component(FS) Ganesha FSAL added
#2 Updated by Sebastian Wagner over 3 years ago

  • Related to Bug #46862: cephadm: nfs ganesha client mount is read only added
#3 Updated by Sebastian Wagner over 3 years ago

  • Related to deleted (Bug #46862: cephadm: nfs ganesha client mount is read only)
#4 Updated by Varsha Rao over 3 years ago

  • Assignee set to Varsha Rao
#6 Updated by Kefu Chai over 3 years ago

/a/kchai-2020-08-19_06:47:30-rados-wip-kefu-testing-2020-08-19-1141-distro-basic-smithi/5359038/

#7 Updated by Varsha Rao over 3 years ago

Ganesha log:

Aug 20 11:28:02 smithi123 bash[40389]: 20/08/2020 11:28:02 : epoch 5f3e5d0a : smithi123 : ganesha.nfsd-1[main] create_export :FSAL :CRIT :Unable to mount Ceph cluster for /.
Aug 20 11:28:02 smithi123 bash[40389]: 20/08/2020 11:28:02 : epoch 5f3e5d0a : smithi123 : ganesha.nfsd-1[main] mdcache_fsal_create_export :FSAL :MAJ :Failed to call create_export on underlying FSAL Ceph
Aug 20 11:28:02 smithi123 bash[40389]: 20/08/2020 11:28:02 : epoch 5f3e5d0a : smithi123 : ganesha.nfsd-1[main] fsal_put :FSAL :INFO :FSAL Ceph now unused
Aug 20 11:28:02 smithi123 bash[40389]: 20/08/2020 11:28:02 : epoch 5f3e5d0a : smithi123 : ganesha.nfsd-1[main] fsal_cfg_commit :CONFIG :CRIT :Could not create export for (/CephFS1) to (/)

MDS log:

Aug 20 11:22:32 smithi123 systemd[1]: ceph-b59aeb1e-e2d6-11ea-a073-001a4aab830c@mds.1.smithi123.lseqhu.service: Service hold-off time over, scheduling restart.
Aug 20 11:22:32 smithi123 systemd[1]: ceph-b59aeb1e-e2d6-11ea-a073-001a4aab830c@mds.1.smithi123.lseqhu.service: Scheduled restart job, restart counter is at 9.
Aug 20 11:22:32 smithi123 systemd[1]: Stopped Ceph mds.1.smithi123.lseqhu for b59aeb1e-e2d6-11ea-a073-001a4aab830c.
Aug 20 11:22:32 smithi123 systemd[1]: ceph-b59aeb1e-e2d6-11ea-a073-001a4aab830c@mds.1.smithi123.lseqhu.service: Start request repeated too quickly.
Aug 20 11:22:32 smithi123 systemd[1]: ceph-b59aeb1e-e2d6-11ea-a073-001a4aab830c@mds.1.smithi123.lseqhu.service: Failed with result 'exit-code'.
Aug 20 11:22:32 smithi123 systemd[1]: Failed to start Ceph mds.1.smithi123.lseqhu for b59aeb1e-e2d6-11ea-a073-001a4aab830c.

The MDS daemon dies before the CephFS export can be created.
http://qa-proxy.ceph.com/teuthology/varsha-2020-08-20_11:02:27-rados-wip-varsha-nfs-testing-distro-basic-smithi/
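
A minimal way to confirm this failure mode before the mount step, assuming a cephadm-deployed cluster with standard ceph/systemd tooling (these commands are illustrative, not part of the test):

    # does the filesystem have an active MDS?
    ceph fs status user_test_fs
    # cephadm daemon inventory; the mds daemon is reported with status "error" above
    ceph orch ps
    # inspect the failing MDS unit on the host
    sudo journalctl -u 'ceph-*@mds.*' --no-pager | tail -n 50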

#8 Updated by Varsha Rao over 3 years ago

  • Status changed from New to In Progress
#9 Updated by Varsha Rao over 3 years ago

  • Category set to Testing
  • Status changed from In Progress to Fix Under Review
  • Target version set to v16.0.0
  • Backport set to octopus
  • Pull request ID set to 36740
  • Component(FS) mgr/nfs, qa-suite added
  • Labels (FS) NFS-cluster added
#10 Updated by Varsha Rao over 3 years ago

  • Status changed from Fix Under Review to Pending Backport
#11 Updated by Sebastian Wagner over 3 years ago

  • Status changed from Pending Backport to Resolved