Quincy

On-call Schedule

  • Feb: Venky
  • Mar: Patrick
  • Apr: Jos
  • May: Xiubo
  • Jun: Rishabh
  • Jul: Kotresh
  • Aug: Milind
  • Sep: Leonid
  • Oct: Dhairya
  • Nov: Chris

2024 March 26

https://tracker.ceph.com/issues/65134
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-quincy-2024-03-14-0655-quincy

2024 Jan 31

https://pulpito.ceph.com/yuriw-2024-01-26_01:07:29-fs-wip-yuri4-testing-2024-01-25-1331-quincy-distro-default-smithi/

2024 Jan 17

https://pulpito.ceph.com/yuriw-2024-01-10_16:13:48-fs-wip-yuri6-testing-2024-01-05-0744-quincy-distro-default-smithi/

2024 Jan 12

https://pulpito.ceph.com/yuriw-2024-01-10_19:20:36-fs-wip-vshankar-testing1-quincy-2024-01-10-2010-quincy-distro-default-smithi/

2024 Jan 2

Re-run: https://pulpito.ceph.com/yuriw-2023-12-27_16:33:25-fs-wip-yuri-testing-2023-12-26-0957-quincy-distro-default-smithi/

2023 Dec 27

https://pulpito.ceph.com/yuriw-2023-12-26_19:48:51-fs-wip-yuri-testing-2023-12-26-0957-quincy-distro-default-smithi/

2023 Dec 21

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-12-14-1108-quincy

2023 Dec 20

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-2023-12-18-1207-reef-2
(Lots of CentOS/RHEL-related issues)

2023 December 14

https://pulpito.ceph.com/vshankar-2023-12-13_09:42:45-fs-wip-vshankar-testing3-2023-12-13-1225-quincy-testing-default-smithi/

2023 October 19

https://pulpito.ceph.com/?branch=wip-vshankar-testing-quincy-20231019.172112

2023 October 10

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-10-10-0720-quincy

2023 October 09

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-10-06-0949-quincy

2023 October 06

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-10-06-0948-quincy

2023 October 03

https://pulpito.ceph.com/vshankar-2023-09-29_10:09:00-fs-wip-vshankar-testing-quincy-20230929.071619-testing-default-smithi/

2023 August 08

https://trello.com/c/ZjPC9CcN/1820-wip-yuri5-testing-2023-08-08-0807-quincy
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-08-08-0807-quincy

4 August 2023

https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-27-1336-quincy

25 July 2023

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-07-14-0724-quincy

2023 July 04

https://pulpito.ceph.com/yuriw-2023-07-03_15:34:02-fs-quincy_release-distro-default-smithi/

2023 June 14

http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/

2023 June 07

http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/

Failed to fetch package version

2023 May 24

https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/

2023 Apr 21/24

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20230420.183701-quincy

2 Failures:
"Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=58e06d348d8a2da339540be5425a40ec7683e512 "

These are a side-effect of the revert https://github.com/ceph/ceph/pull/51029 . This is expected and should be fixed by a new backport of the reverted change.
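
Failures like the one above can be checked by querying the shaman search API directly for the build sha1. A minimal sketch, assuming only the endpoint and query parameters visible in the failing URL above (the helper function name is illustrative):

```python
from urllib.parse import urlencode

def shaman_search_url(sha1, distro="ubuntu/22.04/x86_64",
                      project="ceph", flavor="default"):
    """Build a shaman search URL like the one in the failure above,
    to check whether a ready build exists for a given sha1.
    Parameter names are taken verbatim from the failing URL."""
    query = urlencode({
        "status": "ready",
        "project": project,
        "flavor": flavor,
        "distros": distro,  # urlencode percent-encodes the slashes
        "sha1": sha1,
    })
    return "https://shaman.ceph.com/api/search/?" + query

url = shaman_search_url("58e06d348d8a2da339540be5425a40ec7683e512")
```

Fetching that URL (e.g. with curl or a browser) returns an empty result list when no build with that sha1 is ready, which is what the "Failed to fetch package version" error indicates.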

2023 Mar 02

https://pulpito.ceph.com/yuriw-2023-02-22_20:50:58-fs-wip-yuri4-testing-2023-02-22-0817-quincy-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-02-28_22:41:58-fs-wip-yuri10-testing-2023-02-28-0752-quincy-distro-default-smithi/

2023 Feb 17

https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-fs-wip-yuri3-testing-2023-02-16-0752-quincy-distro-default-smithi/

2023 Feb 16

https://pulpito.ceph.com/yuriw-2023-02-13_20:44:19-fs-wip-yuri8-testing-2023-02-07-0753-quincy-distro-default-smithi/

2023 Feb 15

https://pulpito.ceph.com/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/

2023 Feb 07

https://pulpito.ceph.com/yuriw-2023-02-03_23:44:47-fs-wip-yuri8-testing-2023-01-30-1510-quincy-distro-default-smithi/

2022 Oct 21

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-17_17:37:24-fs-wip-yuri-testing-2022-10-17-0746-quincy-distro-default-smithi/

  • https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
  • https://tracker.ceph.com/issues/57446
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
  • https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
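
Transient PG health warnings like these are usually not test bugs; they are commonly silenced by adding patterns to the suite's log ignore list. A hedged sketch of such an override fragment (the `log-ignorelist` key is used in ceph's qa yaml; the exact file and placement vary by suite):

```
overrides:
  ceph:
    log-ignorelist:
      - \(PG_DEGRADED\)
      - \(PG_AVAILABILITY\)
```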

2022 Oct 17

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-12_16:32:23-fs-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/

2022 Sep 29

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-09-23-1008-quincy

2022 Sep 09

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-08_18:29:21-fs-wip-yuri6-testing-2022-09-08-0859-quincy-distro-default-smithi/

2022 Sep 02

https://pulpito.ceph.com/yuriw-2022-09-01_18:27:02-fs-wip-yuri11-testing-2022-09-01-0804-quincy-distro-default-smithi/

and

https://pulpito.ceph.com/?branch=wip-lflores-testing-2-2022-08-26-2240-quincy

2022 Aug 31

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-08-23-1120-quincy

2022 Aug 17

https://pulpito.ceph.com/yuriw-2022-08-17_18:46:04-fs-wip-yuri7-testing-2022-08-17-0943-quincy-distro-default-smithi/

The following errors were unrelated to the tests and were fixed in the rerun:

Rerun: https://pulpito.ceph.com/yuriw-2022-08-18_15:08:53-fs-wip-yuri7-testing-2022-08-17-0943-quincy-distro-default-smithi/

2022 Aug 10

http://pulpito.front.sepia.ceph.com/yuriw-2022-08-11_02:21:28-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/
  • Most of the failures passed in the re-run. Please check the rerun failures below.
    - tasks/{1-thrash/mon 2-workunit/fs/snaps - reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
    - tasks/{1-thrash/osd 2-workunit/suites/iozone - reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
    - tasks/metrics - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
    - tasks/scrub - No module named 'tasks.cephfs.fuse_mount'
    - tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} - No module named 'tasks.fs'
    - tasks/snap-schedule - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
    - tasks/volumes/{overrides test/clone}} - No module named 'tasks.ceph'
    - tasks/snapshots - CommandFailedError: Command failed on smithi035 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes - INFO:teuthology.orchestra.run.smithi035.stderr:E: Version '17.2.3-414-ge5c30ac2-1focal' for 'python-ceph' was not found - INFO:teuthology.orchestra.run.smithi035.stderr:E: Unable to locate package libcephfs1
    - tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} - No module named 'tasks.ceph'
    - tasks/{1-thrash/osd 2-workunit/suites/pjd}} - No module named 'tasks.ceph'
    - tasks/cfuse_workunit_suites_fsstress traceless/50pc} - No module named 'tasks'
    - tasks/{0-octopus 1-upgrade}} - No module named 'tasks'
    - tasks/{1-thrash/osd 2-workunit/fs/snaps}} - cluster [WRN] client.4520 isn't responding to mclientcaps(revoke),
    - tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} - reached maximum tries (90) after waiting for 540 seconds - teuthology.misc:7 of 8 OSDs are up

Re-run1: http://pulpito.front.sepia.ceph.com/yuriw-2022-08-11_14:24:26-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

  • tasks/{1-thrash/mon 2-workunit/fs/snaps - reached maximum tries (90) after waiting for 540 seconds
    DEBUG:teuthology.misc:7 of 8 OSDs are up
  • http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  • https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
  • tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} - reached maximum tries (90) after waiting for 540 seconds
    DEBUG:teuthology.misc:7 of 8 OSDs are up

Re-run2: http://pulpito.front.sepia.ceph.com/yuriw-2022-08-16_14:46:15-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

2022 Aug 03

https://pulpito.ceph.com/yuriw-2022-08-04_11:54:20-fs-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi/
Re-run: https://pulpito.ceph.com/yuriw-2022-08-09_15:36:21-fs-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi

  • No module named 'tasks' - Fixed in re-run

2022 Jul 22

https://pulpito.ceph.com/yuriw-2022-07-11_13:37:40-fs-wip-yuri5-testing-2022-07-06-1020-quincy-distro-default-smithi/
re-run: https://pulpito.ceph.com/yuriw-2022-07-12_13:37:44-fs-wip-yuri5-testing-2022-07-06-1020-quincy-distro-default-smithi/
Most failures weren't seen in the re-run.

2022 Jul 13

https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-testing-2022-07-08-0453-quincy-distro-default-smithi/

2022 Jun 08

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-07_22:29:43-fs-wip-yuri3-testing-2022-06-07-0722-quincy-distro-default-smithi/

2022 Jun 07

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:32:25-fs-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/

2022 Jun 03

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-06-02-0810-quincy

2022 May 31

http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_21:58:39-fs-wip-yuri2-testing-2022-05-27-1033-quincy-distro-default-smithi/

2022 May 26

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-05-10-1027-quincy

2022 May 10

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-05-05-0838-quincy

2022 April 29

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-04-22-0534-quincy

2022 April 13

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2022-04-11-0746-quincy

2022 March 31

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-29_20:09:22-fs-wip-yuri-testing-2022-03-29-0741-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-30_14:35:58-fs-wip-yuri-testing-2022-03-29-0741-quincy-distro-default-smithi/

A handful of jobs failed due to:

Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c5bb4e7d582f118c1093d94fbfedfb197eaa03b4 -v bootstrap --fsid 44e07f86-b03b-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.55 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

2022 March 17

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-14_18:57:01-fs-wip-yuri2-testing-2022-03-14-0946-quincy-distro-default-smithi/

A couple of jobs died with:

    2022-03-15T05:15:22.447 ERROR:paramiko.transport:Socket exception: No route to host (113)
    2022-03-15T05:15:22.452 DEBUG:teuthology.orchestra.run:got remote process result: None
    2022-03-15T05:15:22.453 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.0...

2022 March 1

  • https://tracker.ceph.com/issues/51282 (maybe?)
    cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
  • https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  • https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi152 with status 1: 'mkdir p - /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
  • https://tracker.ceph.com/issues/50223
    cluster [WRN] client.14480 isn't responding to mclientcaps(revoke), ino 0x1000000f3fd pending pAsLsXsFsc issued pAsLsXsFscb, sent 304.933510 seconds ago" in cluster log
  • https://tracker.ceph.com/issues/54461
    Command failed (workunit test suites/ffsb.sh) on smithi124 with status 1: 'mkdir p - /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
  • https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128: 'mkdir p - /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'