Pacific
On-call Schedule
- Jul: Venky
- Aug: Patrick
- Sep: Jos Collin
- Oct: Xiubo
- Nov: Rishabh
- Dec: Kotresh
- Jan: Milind
Reviews
ADD NEW ENTRY BELOW
20 Feb 2024
https://pulpito.ceph.com/yuriw-2024-02-17_16:03:50-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2024-02-19_19:27:49-fs-pacific-release-distro-default-smithi/
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
31 Jan 2024
https://pulpito.ceph.com/?branch=pacific-release
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
24 Jan 2024
- https://tracker.ceph.com/issues/64059
https://download.ceph.com/qa/ior.tbz2 - ERROR 404: Not Found
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/50224
test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
29 Dec 2023
- https://tracker.ceph.com/issues/63212
qa: failed to download ior.tbz2
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/50224
test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
- https://tracker.ceph.com/issues/63907
cephfs-mirror: Mirror::update_fs_mirrors crashes while taking lock (coredump)
21 Dec 2023
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-12-14-1107-pacific
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/63539
fs/full/subvolume_clone.sh: Health check failed: 1 full osd(s) (OSD_FULL)
- test_acls failed because a known distro wasn't detected
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
4 Dec 2023 (2)
- ior/mdtest failures because packages were missing from download.ceph.com
- test_acls failed because a known distro wasn't detected
- https://tracker.ceph.com/issues/63539
fs/full/subvolume_clone.sh: Health check failed: 1 full osd(s) (OSD_FULL)
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/37808
osd: osdmap cache weak_refs assert during shutdown
04 Dec 2023
- ior package was missing because it was deleted from download.ceph.com
- Error re-imaging machines:
https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-distro-default-smithi/7472083
https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-distro-default-smithi/7472084
https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-distro-default-smithi/7472104
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/51282
cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/54462
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
22 Nov 2023
https://pulpito.ceph.com/yuriw-2023-11-14_20:31:57-fs-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-11-21_15:54:56-smoke-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
- ior/mdtest failures because packages were missing from download.ceph.com
- test_acls failed because a known distro wasn't detected
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
15 Nov 2023
- https://tracker.ceph.com/issues/63539
fs/full/subvolume_clone.sh: Health check failed: 1 full osd(s) (OSD_FULL)
- test_acls: distro name for RHEL 8.4 wasn't recognized by xfstests_dev.py
- ior package was missing because it was deleted from download.ceph.com
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
8 Nov 2023
fs: https://pulpito.ceph.com/vshankar-2023-11-06_07:50:57-fs-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/
smoke: https://pulpito.ceph.com/vshankar-2023-11-06_07:53:57-smoke-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/
- https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
2023 September 12
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
2023 August 31
https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/54462
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
- https://tracker.ceph.com/issues/50222
osd: 5.2s0 deep-scrub : stat mismatch
- https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
Some spurious infrastructure / valgrind noise during cleanup.
2023 August 22
Pacific v16.2.14 QA
https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/
- https://tracker.ceph.com/issues/62578
mon: osd pg-upmap-items command causes PG_DEGRADED warnings
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/62579
client: evicted warning because client completes unmount before thrashed MDS comes back
- https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
2023 August 16 (2)
https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC
2023 August 16
https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific
- https://tracker.ceph.com/issues/62499
testing (?): deadlock ffsb task
- https://tracker.ceph.com/issues/62501
pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC
2023 August 11
https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/
Some infra noise caused a dead job.
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/58340
fsstress.sh failed with errno 124
- https://tracker.ceph.com/issues/48773
Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
- https://tracker.ceph.com/issues/50527
pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
2023 August 8
https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
- https://tracker.ceph.com/issues/62164
qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
- https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/62465
pacific (?): LibCephFS.ShutdownRace segmentation fault
- "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.
2023 August 03
https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
2023 July 25
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
- https://tracker.ceph.com/issues/62160
mds: MDS abort because newly corrupt dentry to be committed
- https://tracker.ceph.com/issues/61201
qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)
2023 May 17
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh pkg and re-run because of package/installation issues)
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/61201 (NEW)
Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
- https://tracker.ceph.com/issues/58340
fsstress.sh failed with errno 124
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/54462
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
- https://tracker.ceph.com/issues/58674
teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
- https://tracker.ceph.com/issues/55446
fs/upgrade/mds_upgrade_sequence - hit max job timeout
2023 May 11
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://tracker.ceph.com/issues/48773
Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
- https://tracker.ceph.com/issues/58992
test_acls (tasks.cephfs.test_acls.TestACLs)
- https://tracker.ceph.com/issues/58340
fsstress.sh failed with errno 124
- https://tracker.ceph.com/issues/51964
Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
- https://tracker.ceph.com/issues/55446
fs/upgrade/mds_upgrade_sequence - hit max job timeout
2023 May 4
- https://tracker.ceph.com/issues/59560
qa: RuntimeError: more than one file system available
- https://tracker.ceph.com/issues/59626
FSMissing: File system xxxx does not exist in the map
- https://tracker.ceph.com/issues/58340
fsstress.sh failed with errno 124
- https://tracker.ceph.com/issues/58992
test_acls
- https://tracker.ceph.com/issues/48773
Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
- https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
2023 Apr 13
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/57594
Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
- https://tracker.ceph.com/issues/54108
qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
- https://tracker.ceph.com/issues/58340
fsstress.sh failed with errno 125
- https://tracker.ceph.com/issues/54462
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
- https://tracker.ceph.com/issues/49287
cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
- https://tracker.ceph.com/issues/58726
test_acls: expected a yum based or a apt based system
2022 Dec 07
many transient git.ceph.com related timeouts
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/50224
test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
- https://tracker.ceph.com/issues/56644
qa: test_rapid_creation fails with "No space left on device"
- https://tracker.ceph.com/issues/58221
pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
2022 Dec 02
many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures
- https://tracker.ceph.com/issues/57723
pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
2022 Dec 01
many transient git.ceph.com related timeouts
- https://tracker.ceph.com/issues/57723
pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
2022 Nov 18
https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/
2 ansible dead failures.
12 transient git.ceph.com related timeouts
- https://tracker.ceph.com/issues/57723
pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
2022 Oct 19
- https://tracker.ceph.com/issues/57723
pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/56644
qa: test_rapid_creation fails with "No space left on device"
- https://tracker.ceph.com/issues/54460
snaptest-multiple-capsnaps.sh test failure
- https://tracker.ceph.com/issues/57892
sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase
2022 Oct 06
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
- https://tracker.ceph.com/issues/56507
Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2022 Sep 27
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/48773
Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
- https://tracker.ceph.com/issues/50224
test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
- https://tracker.ceph.com/issues/56507
Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
- https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
2022 Sep 22
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/51282
cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
2022 Sep 19
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/57594
pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
2022 Sep 15
- https://tracker.ceph.com/issues/51282
cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/48773
Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
2022 AUG 18
https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
- https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
- https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
- https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
- https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
- https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh
2022 AUG 11
https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
Most of the failures passed in the re-run; please check the re-run failures below.
- https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
- https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
- https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
- https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
- https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
Re-run failures:
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
- https://tracker.ceph.com/issues/57083
- https://tracker.ceph.com/issues/53360
tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://tracker.ceph.com/issues/56507
Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2022 AUG 04
- https://tracker.ceph.com/issues/57087
test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
- https://tracker.ceph.com/issues/52624
cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
- https://tracker.ceph.com/issues/51267
tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
- https://tracker.ceph.com/issues/53360
- https://tracker.ceph.com/issues/57083
qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
- https://tracker.ceph.com/issues/56507
Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2022 July 15
https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/57083
- https://tracker.ceph.com/issues/53360
tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://tracker.ceph.com/issues/56507
pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2022 July 08
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://tracker.ceph.com/issues/56506
pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
- https://tracker.ceph.com/issues/56507
pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2022 Jun 28
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2022 Jun 22
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2022 Jun 17
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2022 Jun 16
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/55449
pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
- https://tracker.ceph.com/issues/51267
CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
- https://tracker.ceph.com/issues/55332
Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2022 Jun 15
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/55449
pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
2022 Jun 10
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/55449
pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
2022 Jun 09
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/55449
pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
2022 May 06
https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
2022 April 18
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only mgr/snap_schedule backport pr)
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2022 March 28
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/54411
mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
2022 March 25
https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
- https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/52606
qa: test_dirfrag_limit
- https://tracker.ceph.com/issues/51905
qa: "error reading sessionmap 'mds1_sessionmap'"
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/51183
Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2022 March 22¶
- https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/52606
  qa: test_dirfrag_limit
- https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://tracker.ceph.com/issues/51905
  qa: "error reading sessionmap 'mds1_sessionmap'"
- https://tracker.ceph.com/issues/53360
  pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
- https://tracker.ceph.com/issues/54411
  mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
2021 November 22¶
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi
- https://tracker.ceph.com/issues/53300
  qa: cluster [WRN] Scrub error on inode
- https://tracker.ceph.com/issues/53302
  qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
- https://tracker.ceph.com/issues/53314
  qa: fs/upgrade/mds_upgrade_sequence test timeout
- https://tracker.ceph.com/issues/53316
  qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
- https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/52396
  pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
- https://tracker.ceph.com/issues/52875
  pacific: qa: test_dirfrag_limit
- https://tracker.ceph.com/issues/51705
  pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- https://tracker.ceph.com/issues/39634
  qa: test_full_same_file timeout
- https://tracker.ceph.com/issues/49748
  gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
- https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
2021 November 20¶
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific
- https://tracker.ceph.com/issues/53360
pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
2021 September 14 (QE)¶
https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/
- https://tracker.ceph.com/issues/52606
  qa: test_dirfrag_limit
- https://tracker.ceph.com/issues/52607
  qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
2021 Sep 7¶
https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
- https://tracker.ceph.com/issues/52396
  qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
2021 Aug 30¶
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
- https://tracker.ceph.com/issues/52396
  qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
- https://tracker.ceph.com/issues/52487
  qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
- https://tracker.ceph.com/issues/51267
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
- https://tracker.ceph.com/issues/48772
  qa: pjd: not ok 9, 44, 80
2021 Aug 23¶
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
- https://tracker.ceph.com/issues/52396
  qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
- https://tracker.ceph.com/issues/52397
  qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed
2021 Aug 11¶
https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
- https://tracker.ceph.com/issues/50222
  osd: 5.2s0 deep-scrub : stat mismatch
2021 July 15¶
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
- https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://tracker.ceph.com/issues/50528
  qa: fs:thrash: pjd suite not ok 80
- https://tracker.ceph.com/issues/51706
  qa: osd deep-scrub stat mismatch
2021 July 13¶
- https://tracker.ceph.com/issues/51704
  Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/51705
  qa: tasks.cephfs.fuse_mount:mount command failed
- https://tracker.ceph.com/issues/48640
  qa: snapshot mismatch during mds thrashing
2021 June 29 (Integration Branch)¶
Some failures caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/50260
  pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
- https://tracker.ceph.com/issues/51183
  qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2021 June 28¶
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/51440
  fallocate fails with EACCES
- https://tracker.ceph.com/issues/51264
  TestVolumeClient failure
- https://tracker.ceph.com/issues/51266
  test cleanup failure
- https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2021 June 14¶
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://bugzilla.redhat.com/show_bug.cgi?id=1973276
  Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
- https://tracker.ceph.com/issues/51263
  pjdfstest rename test 10.t failed with EACCES
- https://tracker.ceph.com/issues/51264
  TestVolumeClient failure
- https://tracker.ceph.com/issues/51266
  Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtestls ; rmdir -/home/ubuntu/cephtest'
- https://tracker.ceph.com/issues/50279
  qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- https://tracker.ceph.com/issues/51267
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1
2021 June 07 (Integration Branch)¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/50279
  qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- https://tracker.ceph.com/issues/48773
  qa: scrub does not complete
- https://tracker.ceph.com/issues/51170
  pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
- https://tracker.ceph.com/issues/48203 (stock kernel update required)
  qa: quota failure
2021 Apr 28 (QE pre-release)¶
https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/50258
  pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- https://tracker.ceph.com/issues/50260
  pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
- https://tracker.ceph.com/issues/49962
  'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
- https://tracker.ceph.com/issues/50016
  qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- https://tracker.ceph.com/issues/48203 (stock kernel update required)
  qa: quota failure
- https://tracker.ceph.com/issues/50528
  pacific: qa: fs:thrash: pjd suite not ok 20
2021 Apr 22 (Integration Branch)¶
- https://tracker.ceph.com/issues/50527
  pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
- https://tracker.ceph.com/issues/50528
  pacific: qa: fs:thrash: pjd suite not ok 20
- https://tracker.ceph.com/issues/49500 (fixed in another integration run)
  qa: "Assertion `cb_done' failed."
- https://tracker.ceph.com/issues/48203 (stock kernel update required)
  qa: quota failure
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/50279
  qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- https://tracker.ceph.com/issues/50258
  pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- https://tracker.ceph.com/issues/49962
  'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
- https://tracker.ceph.com/issues/50530
  pacific: client: abort after MDS blocklist
2021 Apr 21 (Integration Branch)¶
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/50250
  mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
- https://tracker.ceph.com/issues/50258
  pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- https://tracker.ceph.com/issues/48203 (stock kernel update required)
  qa: quota failure
- https://tracker.ceph.com/issues/50016
  qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- https://tracker.ceph.com/issues/50495
  pacific: client: shutdown race fails with status 141
2021 Apr 07 (Integration Branch)¶
- https://tracker.ceph.com/issues/45434
  qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- https://tracker.ceph.com/issues/48805
  mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
- https://tracker.ceph.com/issues/49500
  qa: "Assertion `cb_done' failed."
- https://tracker.ceph.com/issues/50258 (new)
  pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- https://tracker.ceph.com/issues/49962
  'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
- https://tracker.ceph.com/issues/48203 (stock kernel update required)
  qa: quota failure
- https://tracker.ceph.com/issues/50260
  pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
- https://tracker.ceph.com/issues/50016
  qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"