Bug #48916

"File system None does not exist in the map" in upgrade:octopus-x:parallel-master

Added by Yuri Weinstein about 3 years ago. Updated about 3 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Run: https://pulpito.ceph.com/teuthology-2021-01-16_16:12:35-upgrade:octopus-x:parallel-master-distro-basic-smithi/
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2021-01-16_16:12:35-upgrade:octopus-x:parallel-master-distro-basic-smithi/5791822/teuthology.log

2021-01-16T16:37:18.554 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:18 smithi063 systemd[1]: Stopping Ceph mds.a for d3b22aa2-5817-11eb-8f90-001a4aab830c...
2021-01-16T16:37:18.555 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:18 smithi063 bash[37114]: debug 2021-01-16T16:37:18.379+0000 7fb4597c5700 -1 received  signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2021-01-16T16:37:18.555 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:18 smithi063 bash[37114]: debug 2021-01-16T16:37:18.379+0000 7fb4597c5700 -1 mds.a *** got signal Terminated ***
2021-01-16T16:37:19.523 INFO:teuthology.orchestra.run.smithi063.stderr:dumped fsmap epoch 24
2021-01-16T16:37:19.535 INFO:tasks.cephadm:Teardown begin
2021-01-16T16:37:19.535 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 33, in nested
    yield vars
  File "/home/teuthworker/src/github.com_yuriw_ceph_wip-yuriw-octopus-x-master/qa/tasks/cephadm.py", line 1261, in task
    healthy(ctx=ctx, config=config)
  File "/home/teuthworker/src/github.com_yuriw_ceph_wip-yuriw-octopus-x-master/qa/tasks/ceph.py", line 1452, in healthy
    ceph_fs.wait_for_daemons(timeout=300)
  File "/home/teuthworker/src/github.com_yuriw_ceph_wip-yuriw-octopus-x-master/qa/tasks/cephfs/filesystem.py", line 1031, in wait_for_daemons
    if self.are_daemons_healthy(status=status, skip_max_mds_check=skip_max_mds_check):
  File "/home/teuthworker/src/github.com_yuriw_ceph_wip-yuriw-octopus-x-master/qa/tasks/cephfs/filesystem.py", line 879, in are_daemons_healthy
    mds_map = self.get_mds_map(status=status)
  File "/home/teuthworker/src/github.com_yuriw_ceph_wip-yuriw-octopus-x-master/qa/tasks/cephfs/filesystem.py", line 780, in get_mds_map
    return status.get_fsmap(self.id)['mdsmap']
  File "/home/teuthworker/src/github.com_yuriw_ceph_wip-yuriw-octopus-x-master/qa/tasks/cephfs/filesystem.py", line 119, in get_fsmap
    raise FSMissing(fscid)
tasks.cephfs.filesystem.FSMissing: File system None does not exist in the map
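For context, the raise at filesystem.py:119 corresponds to a lookup like the sketch below (a minimal paraphrase of what the traceback shows; apart from the FSMissing and get_fsmap names, the surrounding shape is an assumption). No filesystem was ever created in this run, so Filesystem.id is still None and the dumped fsmap has no entries; even the None "match anything" lookup falls through to the raise, which is why the message literally says "File system None".

# Minimal sketch (assumed shape) of the lookup the traceback ends in.
class FSMissing(Exception):
    def __init__(self, ident):
        super().__init__(f"File system {ident} does not exist in the map")

class FSStatus(object):
    """Wraps the decoded output of `ceph fs dump`."""
    def __init__(self, fsmap):
        self.map = fsmap

    def get_fsmap(self, fscid):
        # fscid=None means "any filesystem"; with an empty
        # 'filesystems' list even that finds nothing.
        for fs in self.map['filesystems']:
            if fscid is None or fs['id'] == fscid:
                return fs
        raise FSMissing(fscid)

status = FSStatus({'filesystems': []})  # what this run's fsmap amounts to
status.get_fsmap(None)                  # -> FSMissing: File system None ...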


Related issues

Related to Orchestrator - Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly (status: Can't reproduce)

History

#1 Updated by Yuri Weinstein about 3 years ago

  • Description updated (diff)

#2 Updated by Yuri Weinstein about 3 years ago

This is testing `blogbench` (https://github.com/yuriw/ceph/tree/wip-yuriw-octopus-x-master/qa/suites/upgrade/octopus-x/parallel):

meta:
- desc: |
    run a cephfs stress test
    mount ceph-fuse on client.2 before running workunit
workload:
  full_sequential:
  - sequential:
    - ceph-fuse:
    - print: "**** done end ceph-fuse"
    - workunit:
        clients:
          client.2:
          - suites/blogbench.sh
    - print: "**** done end blogbench.yaml"
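The ceph-fuse mount (and the healthy() wait it triggers) presupposes an existing CephFS filesystem. A quick way to confirm whether the cluster this workload runs against has one is to parse the fsmap the same way the failing code does; this is a hypothetical probe, not part of the suite, and it assumes the JSON fsmap carries a 'filesystems' list as the teuthology code above suggests:

import json
import subprocess

def cluster_has_filesystem() -> bool:
    # `ceph fs dump -f json` emits the fsmap; an empty 'filesystems'
    # list is exactly the condition that makes get_fsmap() raise.
    out = subprocess.check_output(['ceph', 'fs', 'dump', '-f', 'json'])
    return bool(json.loads(out).get('filesystems'))

print(cluster_has_filesystem())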

#3 Updated by Neha Ojha about 3 years ago

Perhaps you need to add:

- cephadm.shell:
    host.a:
      - ceph orch apply mds a

See qa/suites/rados/cephadm/workunits/task/test_orch_cli.yaml.
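Note that `ceph orch apply mds a` only schedules MDS daemons; per the related #45595, the fsmap stays empty until a filesystem itself is created. A hedged sketch of the missing pre-step (its placement in this suite is an assumption, and 'foo' is a placeholder name; `ceph fs volume create` is the standard CLI that creates the pools and the filesystem, and under cephadm it also schedules MDS daemons for it):

import subprocess

# Create a filesystem so the fsmap is non-empty before healthy()
# calls wait_for_daemons().
subprocess.check_call(['ceph', 'fs', 'volume', 'create', 'foo'])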

#4 Updated by Brad Hubbard about 3 years ago

I think this is the original issue:

2021-01-16T16:37:13.712 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:13 smithi063 docker[36954]: ceph-d3b22aa2-5817-11eb-8f90-001a4aab830c-mds.a
2021-01-16T16:37:13.712 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:13 smithi063 systemd[1]: Stopped Ceph mds.a for d3b22aa2-5817-11eb-8f90-001a4aab830c.
2021-01-16T16:37:13.712 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:13 smithi063 systemd[1]: Starting Ceph mds.a for d3b22aa2-5817-11eb-8f90-001a4aab830c...
2021-01-16T16:37:13.713 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:13 smithi063 docker[37102]: Error: No such container: ceph-d3b22aa2-5817-11eb-8f90-001a4aab830c-mds.a
2021-01-16T16:37:13.713 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:13 smithi063 systemd[1]: Started Ceph mds.a for d3b22aa2-5817-11eb-8f90-001a4aab830c.
2021-01-16T16:37:13.713 INFO:journalctl@ceph.mds.a.smithi063.stdout:Jan 16 16:37:13 smithi063 bash[37114]: Error: No such container: ceph-d3b22aa2-5817-11eb-8f90-001a4aab830c-mds.a

#5 Updated by Brad Hubbard about 3 years ago

  • Project changed from Ceph to Orchestrator

Looks like a cephadm issue?

#6 Updated by Varsha Rao about 3 years ago

This issue is related to https://tracker.ceph.com/issues/45595

#7 Updated by Sebastian Wagner about 3 years ago

  • Related to Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly added

#8 Updated by Sebastian Wagner about 3 years ago

  • Status changed from New to Duplicate
