Bug #54210 (closed)

pacific: mon/pg_autoscaler.sh: echo failed on "bash -c 'ceph osd pool get a pg_num | grep 256'"

Added by Laura Flores about 2 years ago. Updated about 2 years ago.

Status: Resolved
Priority: Normal
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2021-12-22T11:43:16.219 INFO:tasks.workunit:Running workunits matching mon/pg_autoscaler.sh on client.0...
2021-12-22T11:43:16.220 INFO:tasks.workunit:Running workunit mon/pg_autoscaler.sh...
2021-12-22T11:43:16.221 DEBUG:teuthology.orchestra.run.smithi027:workunit test mon/pg_autoscaler.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c139666c619f8451c172537870108109e3c4c2ce TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh
2021-12-22T11:43:16.278 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd ls
2021-12-22T11:43:16.278 INFO:tasks.workunit.client.0.smithi027.stderr:++ wc -l
2021-12-22T11:43:16.691 INFO:tasks.workunit.client.0.smithi027.stderr:+ NUM_OSDS=8
2021-12-22T11:43:16.691 INFO:tasks.workunit.client.0.smithi027.stderr:+ '[' 8 -lt 6 ']'
2021-12-22T11:43:16.692 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool ls
2021-12-22T11:43:16.692 INFO:tasks.workunit.client.0.smithi027.stderr:++ wc -l
2021-12-22T11:43:17.096 INFO:tasks.workunit.client.0.smithi027.stderr:+ NUM_POOLS=0
2021-12-22T11:43:17.096 INFO:tasks.workunit.client.0.smithi027.stderr:+ '[' 0 -gt 0 ']'
2021-12-22T11:43:17.097 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph config set mgr mgr/pg_autoscaler/sleep_interval 5
2021-12-22T11:43:17.510 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph mgr module enable pg_autoscaler
2021-12-22T11:43:19.394 INFO:tasks.workunit.client.0.smithi027.stderr:module 'pg_autoscaler' is already enabled (always-on)
2021-12-22T11:43:19.406 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph osd pool create a 16 --pg-num-min 4
2021-12-22T11:43:20.520 INFO:tasks.workunit.client.0.smithi027.stderr:pool 'a' already exists
2021-12-22T11:43:20.530 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph osd pool create b 16 --pg-num-min 2
2021-12-22T11:43:21.525 INFO:tasks.workunit.client.0.smithi027.stderr:pool 'b' already exists
2021-12-22T11:43:21.534 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph osd pool set a pg_autoscale_mode on
2021-12-22T11:43:23.406 INFO:tasks.workunit.client.0.smithi027.stderr:set pool 1 pg_autoscale_mode to on
2021-12-22T11:43:23.418 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph osd pool set b pg_autoscale_mode on
2021-12-22T11:43:25.440 INFO:tasks.workunit.client.0.smithi027.stderr:set pool 2 pg_autoscale_mode to on
2021-12-22T11:43:25.453 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool ls
2021-12-22T11:43:25.453 INFO:tasks.workunit.client.0.smithi027.stderr:++ wc -l
2021-12-22T11:43:25.865 INFO:tasks.workunit.client.0.smithi027.stderr:+ NUM_POOLS=2
2021-12-22T11:43:25.866 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool autoscale-status
2021-12-22T11:43:25.866 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep a
2021-12-22T11:43:25.866 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep -o -m 1 'scale-up\|scale-down'
2021-12-22T11:43:26.271 INFO:tasks.workunit.client.0.smithi027.stderr:+ PROFILE1=scale-up
2021-12-22T11:43:26.272 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool autoscale-status
2021-12-22T11:43:26.272 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep b
2021-12-22T11:43:26.272 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep -o -m 1 'scale-up\|scale-down'
2021-12-22T11:43:26.686 INFO:tasks.workunit.client.0.smithi027.stderr:+ PROFILE2=scale-up
2021-12-22T11:43:26.687 INFO:tasks.workunit.client.0.smithi027.stderr:+ [[ scale-up = \s\c\a\l\e\-\u\p ]]
2021-12-22T11:43:26.687 INFO:tasks.workunit.client.0.smithi027.stderr:+ echo 'Success: pool a PROFILE is scale-up'
2021-12-22T11:43:26.687 INFO:tasks.workunit.client.0.smithi027.stdout:Success: pool a PROFILE is scale-up
2021-12-22T11:43:26.688 INFO:tasks.workunit.client.0.smithi027.stderr:+ [[ scale-up = \s\c\a\l\e\-\u\p ]]
2021-12-22T11:43:26.688 INFO:tasks.workunit.client.0.smithi027.stdout:Success: pool b PROFILE is scale-up
2021-12-22T11:43:26.689 INFO:tasks.workunit.client.0.smithi027.stderr:+ echo 'Success: pool b PROFILE is scale-up'
2021-12-22T11:43:26.689 INFO:tasks.workunit.client.0.smithi027.stderr:+ ceph osd pool set autoscale-profile scale-down
2021-12-22T11:43:27.089 INFO:tasks.workunit.client.0.smithi027.stderr:autoscale-profile is already a scale-down!
2021-12-22T11:43:27.099 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool autoscale-status
2021-12-22T11:43:27.099 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep a
2021-12-22T11:43:27.100 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep -o -m 1 'scale-up\|scale-down'
2021-12-22T11:43:27.506 INFO:tasks.workunit.client.0.smithi027.stderr:+ PROFILE1=scale-down
2021-12-22T11:43:27.507 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool autoscale-status
2021-12-22T11:43:27.507 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep b
2021-12-22T11:43:27.507 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep -o -m 1 'scale-up\|scale-down'
2021-12-22T11:43:27.913 INFO:tasks.workunit.client.0.smithi027.stderr:+ PROFILE2=scale-down
2021-12-22T11:43:27.914 INFO:tasks.workunit.client.0.smithi027.stderr:+ [[ scale-down = \s\c\a\l\e\-\d\o\w\n ]]
2021-12-22T11:43:27.914 INFO:tasks.workunit.client.0.smithi027.stderr:+ echo 'Success: pool a PROFILE is scale-down'
2021-12-22T11:43:27.914 INFO:tasks.workunit.client.0.smithi027.stdout:Success: pool a PROFILE is scale-down
2021-12-22T11:43:27.915 INFO:tasks.workunit.client.0.smithi027.stderr:+ [[ scale-down = \s\c\a\l\e\-\d\o\w\n ]]
2021-12-22T11:43:27.915 INFO:tasks.workunit.client.0.smithi027.stdout:Success: pool b PROFILE is scale-down
2021-12-22T11:43:27.916 INFO:tasks.workunit.client.0.smithi027.stderr:+ echo 'Success: pool b PROFILE is scale-down'
2021-12-22T11:43:27.916 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool get a size
2021-12-22T11:43:27.917 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep -Eo '[0-9]{1,4}'
2021-12-22T11:43:28.323 INFO:tasks.workunit.client.0.smithi027.stderr:+ POOL_SIZE_A=2
2021-12-22T11:43:28.324 INFO:tasks.workunit.client.0.smithi027.stderr:++ ceph osd pool get b size
2021-12-22T11:43:28.324 INFO:tasks.workunit.client.0.smithi027.stderr:++ grep -Eo '[0-9]{1,4}'
2021-12-22T11:43:28.737 INFO:tasks.workunit.client.0.smithi027.stderr:+ POOL_SIZE_B=2
2021-12-22T11:43:28.738 INFO:tasks.workunit.client.0.smithi027.stderr:++ power2 200
2021-12-22T11:43:28.738 INFO:tasks.workunit.client.0.smithi027.stderr:++ echo 'x=l(200)/l(2); scale=0; 2^((x+0.5)/1)'
2021-12-22T11:43:28.738 INFO:tasks.workunit.client.0.smithi027.stderr:++ bc -l
2021-12-22T11:43:28.757 INFO:tasks.workunit.client.0.smithi027.stderr:+ TARGET_PG_B=256
2021-12-22T11:43:28.757 INFO:tasks.workunit.client.0.smithi027.stderr:+ wait_for 120 'ceph osd pool get a pg_num | grep 256'
2021-12-22T11:43:28.758 INFO:tasks.workunit.client.0.smithi027.stderr:+ local sec=120
2021-12-22T11:43:28.758 INFO:tasks.workunit.client.0.smithi027.stderr:+ local 'cmd=ceph osd pool get a pg_num | grep 256'
2021-12-22T11:43:28.758 INFO:tasks.workunit.client.0.smithi027.stderr:+ true
2021-12-22T11:43:28.758 INFO:tasks.workunit.client.0.smithi027.stderr:+ bash -c 'ceph osd pool get a pg_num | grep 256'
2021-12-22T11:43:29.169 INFO:tasks.workunit.client.0.smithi027.stderr:+ sec=119
2021-12-22T11:43:29.169 INFO:tasks.workunit.client.0.smithi027.stderr:+ '[' 119 -eq 0 ']'
2021-12-22T11:43:29.170 INFO:tasks.workunit.client.0.smithi027.stderr:+ sleep 1
2021-12-22T11:43:30.170 INFO:tasks.workunit.client.0.smithi027.stderr:+ true

[... similar wait_for iterations continue, counting sec down toward 0 ...]

2021-12-22T11:46:18.175 INFO:tasks.workunit.client.0.smithi027.stderr:+ bash -c 'ceph osd pool get a pg_num | grep 256'
2021-12-22T11:46:18.596 INFO:tasks.workunit.client.0.smithi027.stderr:+ sec=0
2021-12-22T11:46:18.596 INFO:tasks.workunit.client.0.smithi027.stderr:+ '[' 0 -eq 0 ']'
2021-12-22T11:46:18.597 INFO:tasks.workunit.client.0.smithi027.stdout:failed
2021-12-22T11:46:18.597 DEBUG:teuthology.orchestra.run:got remote process result: 1
2021-12-22T11:46:18.598 INFO:tasks.workunit.client.0.smithi027.stderr:+ echo failed
2021-12-22T11:46:18.599 INFO:tasks.workunit.client.0.smithi027.stderr:+ return 1
2021-12-22T11:46:18.599 INFO:tasks.workunit:Stopping ['mon/pg_autoscaler.sh'] on client.0...
2021-12-22T11:46:18.600 DEBUG:teuthology.orchestra.run.smithi027:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2021-12-22T11:46:18.867 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/run_tasks.py", line 91, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/run_tasks.py", line 70, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_c139666c619f8451c172537870108109e3c4c2ce/qa/tasks/workunit.py", line 148, in task
    cleanup=cleanup)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_c139666c619f8451c172537870108109e3c4c2ce/qa/tasks/workunit.py", line 298, in _spawn_on_all_clients
    timeout=timeout)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_c139666c619f8451c172537870108109e3c4c2ce/qa/tasks/workunit.py", line 427, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/orchestra/remote.py", line 509, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/orchestra/run.py", line 183, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed (workunit test mon/pg_autoscaler.sh) on smithi027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
 && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c139666c619f8451c172537870108109e3c4c2ce TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'

Also seen in pacific:
/a/yuriw-2022-02-05_22:51:11-rados-wip-yuri2-testing-2022-02-04-1646-pacific-distro-default-smithi/6663429

According to the Sentry events, this failure has been going on since November 2021.
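For context, the failing check comes from the qa/workunits/mon/pg_autoscaler.sh workunit, and two pieces of the trace above are worth spelling out. First, power2 200 rounds the value to the nearest power of two via bc: l(200)/l(2) ≈ 7.64, adding 0.5 and truncating at scale=0 gives 8, and 2^8 = 256, hence TARGET_PG_B=256. Second, wait_for retries the supplied command once per second until it succeeds or the time budget is used up. A minimal sketch of that retry loop, reconstructed from the "+" trace lines above (the exact body in the script may differ slightly):

    function wait_for() {
        local sec=$1    # time budget in seconds (120 here)
        local cmd=$2    # command to retry, e.g. "ceph osd pool get a pg_num | grep 256"
        while true; do
            if bash -c "$cmd"; then
                break                # command succeeded; stop waiting
            fi
            sec=$((sec - 1))
            if [ $sec -eq 0 ]; then
                echo failed          # the "failed" line seen in the log
                return 1             # surfaces as the workunit's exit status 1
            fi
            sleep 1
        done
    }

In this run the test waits up to 120 seconds for pool a's pg_num to reach 256; the autoscaler never gets it there, wait_for prints "failed" and returns 1, and that non-zero status is what teuthology reports as the CommandFailedError in the traceback.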


Related issues (1): 0 open, 1 closed

Related to RADOS - Backport #54082: pacific: mon: osd pool create <pool-name> with --bulk flag (Resolved, assigned to Kamoltat (Junior) Sirivadhna)
#1 - Updated by Laura Flores about 2 years ago

  • Backport changed from pacific to quincy, pacific
#2 - Updated by Laura Flores about 2 years ago

  • Assignee set to Kamoltat (Junior) Sirivadhna

Junior, maybe you have an idea of what's going on?

#3 - Updated by Laura Flores about 2 years ago

Laura Flores wrote:

[...]

Also seen in pacific:
/a/yuriw-2022-02-05_22:51:11-rados-wip-yuri2-testing-2022-02-04-1646-pacific-distro-default-smithi/6663429

According to the Sentry events, this failure has been going on since November 2021.

The pasted example is from /a/yuriw-2021-12-22_00:00:47-rados-wip-yuri4-testing-2021-12-21-1418-distro-default-smithi/6576796.

#4 - Updated by Kamoltat (Junior) Sirivadhna about 2 years ago

Yep, this is a bug. Thanks for letting me know; patch coming up.

#5 - Updated by Kamoltat (Junior) Sirivadhna about 2 years ago

After going through Sentry, I've realized that the only occurrence of this bug in master happens before the merge of
https://github.com/ceph/ceph/pull/44241 to master. After that, this bug doesn't appear in master anymore, but it still appears in pacific. I am pretty confident that the backport https://github.com/ceph/ceph/pull/44847 will resolve this bug by itself, so we are just waiting for 44847 to merge.

#6 - Updated by Kamoltat (Junior) Sirivadhna about 2 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 44847
#7 - Updated by Neha Ojha about 2 years ago

  • Subject changed from mon/pg_autoscaler.sh: echo failed on "bash -c 'ceph osd pool get a pg_num | grep 256'" to pacific: mon/pg_autoscaler.sh: echo failed on "bash -c 'ceph osd pool get a pg_num | grep 256'"
  • Backport deleted (quincy, pacific)
#8 - Updated by Neha Ojha about 2 years ago

  • Related to Backport #54082: pacific: mon: osd pool create <pool-name> with --bulk flag added
#9 - Updated by Radoslaw Zarzynski about 2 years ago

  • Status changed from Fix Under Review to Resolved