h1. Pacific 

 h2. On-call Schedule 

 * Jul: Venky 
 * Aug: Patrick 
 * Sep: Jos 
 * Oct: Xiubo 
 * Nov: Rishabh 
 * Dec: Kotresh 
 * Jan: Milind 

 h2. Reviews 

 

 h3. 2023 August 31 

 https://github.com/ceph/ceph/pull/53189 
 https://github.com/ceph/ceph/pull/53243 
 https://github.com/ceph/ceph/pull/53185 
 https://github.com/ceph/ceph/pull/52744 
 https://github.com/ceph/ceph/pull/51045 

 https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/62501 
     pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh) 
 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/54462 
     Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 
 * https://tracker.ceph.com/issues/50222 
     osd: 5.2s0 deep-scrub : stat mismatch 
 * https://tracker.ceph.com/issues/50250 
     mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") 

 Some spurious infrastructure / valgrind noise during cleanup. 
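
For the scrub-error entry above (tracker 50250), the quoted MDS warning itself points at `damage ls`. A minimal triage sketch, assuming a rank-0 MDS named mds.a as in the warning; the daemon name and path are taken from the quoted message, not from a specific job:

<pre>
# List the damage-table entries the MDS recorded when scrub flagged the inode:
ceph tell mds.a damage ls

# Re-run a recursive scrub on the affected subtree to check whether the
# rstat mismatch reappears:
ceph tell mds.a scrub start /client.0/tmp recursive
</pre>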

 

 h3. 2023 August 22 

 Pacific v16.2.14 QA 

 https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/ 
 https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/62578 
     mon: osd pg-upmap-items command causes PG_DEGRADED warnings 
 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log (see the sketch after this list)
 * https://tracker.ceph.com/issues/62501 
     pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh) 
 * https://tracker.ceph.com/issues/58992 
     test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/62579 
     client: evicted warning because client completes unmount before thrashed MDS comes back 
 * https://tracker.ceph.com/issues/62580 
     testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays) 
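
The PG_AVAILABILITY failures above (tracker 52624, which shows up in most runs below as well) are health warnings about PGs briefly stuck peering during cluster bring-up. A minimal sketch of checking such a warning by hand on a live cluster; these are standard Ceph CLI commands, nothing specific to these runs is assumed:

<pre>
# Show the active health checks and the PGs they name:
ceph health detail

# Peering PGs count as inactive; list any that remain stuck:
ceph pg dump_stuck inactive
</pre>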

 h3. 2023 August 16-2 

 https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific 
 https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific 

 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/58992 
     test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/62501 
     pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC 

 h3. 2023 August 16 

 https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa 
 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific 

 * https://tracker.ceph.com/issues/62499 
     testing (?): deadlock ffsb task 
 * https://tracker.ceph.com/issues/62501 
     pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC 

 h3. 2023 August 11 

 https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific 
 https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/ 

Some infra noise caused a dead job.

 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/58992 
     test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/58340 
     fsstress.sh failed with errno 124  
 * https://tracker.ceph.com/issues/48773 
     Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete 
 * https://tracker.ceph.com/issues/50527 
     pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse) 

 h3. 2023 August 8 

 https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific 

 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/50223 
     qa: "client.4737 isn't responding to mclientcaps(revoke)" 
 * https://tracker.ceph.com/issues/62164 
     qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..." 
 * https://tracker.ceph.com/issues/51964 
     qa: test_cephfs_mirror_restart_sync_on_blocklist failure 
 * https://tracker.ceph.com/issues/58992 
     test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/62465 
     pacific (?): LibCephFS.ShutdownRace segmentation fault 
 * "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash. 

 h3. 2023 August 03 

 https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p 
 https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific 

 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/58992 
     test_acls (tasks.cephfs.test_acls.TestACLs) 

 h3. 2023 July 25 

 https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific 

 * https://tracker.ceph.com/issues/52624 
    "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/58992 
     test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/50223 
     qa: "client.4737 isn't responding to mclientcaps(revoke)" 
 * https://tracker.ceph.com/issues/62160 
     mds: MDS abort because newly corrupt dentry to be committed 
 * https://tracker.ceph.com/issues/61201 
     qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash) 


 h3. 2023 May 17 

 https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ 
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh packages and a re-run because of package/installation issues)

 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/61201 (NEW) 
   Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan) 
 * https://tracker.ceph.com/issues/58340 
   fsstress.sh failed with errno 124  
 * https://tracker.ceph.com/issues/58992 
   test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/54462 
   Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128  
 * https://tracker.ceph.com/issues/58674 
   teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds  
 * https://tracker.ceph.com/issues/55446 
   fs/upgrade/mds_upgrade_sequence - hit max job timeout 
 


 h3. 2023 May 11 

 https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi 

 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/51964 
   qa: test_cephfs_mirror_restart_sync_on_blocklist failure 
 * https://tracker.ceph.com/issues/48773 
   Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete 
 * https://tracker.ceph.com/issues/50223 
   qa: "client.4737 isn't responding to mclientcaps(revoke)" 
 * https://tracker.ceph.com/issues/58992 
   test_acls (tasks.cephfs.test_acls.TestACLs) 
 * https://tracker.ceph.com/issues/58340 
   fsstress.sh failed with errno 124 
 * https://tracker.ceph.com/issues/55446 
   fs/upgrade/mds_upgrade_sequence - hit max job timeout 

 h3. 2023 May 4 

 https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/59560 
   qa: RuntimeError: more than one file system available 
 * https://tracker.ceph.com/issues/59626 
   FSMissing: File system xxxx does not exist in the map 
 * https://tracker.ceph.com/issues/58340 
   fsstress.sh failed with errno 124 
 * https://tracker.ceph.com/issues/58992 
   test_acls 
 * https://tracker.ceph.com/issues/48773 
   Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete 
 * https://tracker.ceph.com/issues/57676 
   qa: error during scrub thrashing: rank damage found: {'backtrace'} 


 h3. 2023 Apr 13 

 https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/ 

* https://tracker.ceph.com/issues/52624
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/57594 
   Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan) 
 * https://tracker.ceph.com/issues/54108 
   qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"  
 * https://tracker.ceph.com/issues/58340 
   fsstress.sh failed with errno 125 
 * https://tracker.ceph.com/issues/54462 
   Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 
 * https://tracker.ceph.com/issues/49287      
   cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found 
 * https://tracker.ceph.com/issues/58726 
   test_acls: expected a yum based or a apt based system 

 h3. 2022 Dec 07 

 https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/ 

Many transient git.ceph.com-related timeouts.

 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/50224 
     test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring) 
 * https://tracker.ceph.com/issues/56644 
     qa: test_rapid_creation fails with "No space left on device" 
 * https://tracker.ceph.com/issues/58221 
     pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) 

 h3. 2022 Dec 02 

Many transient git.ceph.com-related timeouts.
Many transient 'Failed to connect to the host via ssh' failures.

 * https://tracker.ceph.com/issues/57723 
   pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails 
 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

 h3. 2022 Dec 01 

Many transient git.ceph.com-related timeouts.

 * https://tracker.ceph.com/issues/57723 
   pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails 
 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

 h3. 2022 Nov 18 

 https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif 
 https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/ 

2 dead jobs due to ansible failures.
12 transient git.ceph.com-related timeouts.

 * https://tracker.ceph.com/issues/57723 
   pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails 
 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

 h3. 2022 Oct 19 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/57723 
   pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails 
 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/56644 
   qa: test_rapid_creation fails with "No space left on device" 
 * https://tracker.ceph.com/issues/54460 
   snaptest-multiple-capsnaps.sh test failure 
 * https://tracker.ceph.com/issues/57892 
   sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase 



 h3. 2022 Oct 06 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
     "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/50223 
	 qa: "client.4737 isn't responding to mclientcaps(revoke)" 
 * https://tracker.ceph.com/issues/56507 
   Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 

 h3. 2022 Sep 27 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
     "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/48773 
     Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete 
 * https://tracker.ceph.com/issues/50224 
     test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring) 
 * https://tracker.ceph.com/issues/56507 
     Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 
 * https://tracker.ceph.com/issues/50223 
	 qa: "client.4737 isn't responding to mclientcaps(revoke)" 

 h3. 2022 Sep 22 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific 

 * https://tracker.ceph.com/issues/52624 
     "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/51282 
     "cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
 * https://tracker.ceph.com/issues/53360 
     pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 

 h3. 2022 Sep 19 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
     "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/57594 
     pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan) 

 h3. 2022 Sep 15 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/51282 
     "cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
 * https://tracker.ceph.com/issues/52624 
     "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/53360 
     pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/48773 
     Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete 

h3. 2022 Aug 18

 https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/ 
* https://tracker.ceph.com/issues/52624 - "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 
 * https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete 
 * https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure 
 * https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails 
 * https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 


 Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/ 
* https://tracker.ceph.com/issues/52624 - "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh 


h3. 2022 Aug 11

 https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/ 
* Most of the failures passed in the re-run. Please check the re-run failures below.
   - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash 
   - https://tracker.ceph.com/issues/52624 - "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
   - https://tracker.ceph.com/issues/51282 - "cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
   - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring) 
   - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}  
   - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 
   - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)    

 Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/ 
 * https://tracker.ceph.com/issues/52624 
   "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
 * tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run. 
 * https://tracker.ceph.com/issues/57083 
 * https://tracker.ceph.com/issues/53360 
   tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed 
   client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/51183 
   Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 
 * https://tracker.ceph.com/issues/56507 
   Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 


h3. 2022 Aug 04

 https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/57087 
   test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) 
 * https://tracker.ceph.com/issues/52624 
   cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log 
 * https://tracker.ceph.com/issues/51267 
   tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status       
 * https://tracker.ceph.com/issues/53360 
 * https://tracker.ceph.com/issues/57083 
   qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
 * https://tracker.ceph.com/issues/56507 
   Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 


 h3. 2022 July 15 

 https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi 
 Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/57083 
 * https://tracker.ceph.com/issues/53360 
         tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed 
         client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"  
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 
 * https://tracker.ceph.com/issues/56507 
         pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 


 h3. 2022 July 08 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 
 * https://tracker.ceph.com/issues/56506 
         pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan) 
 * https://tracker.ceph.com/issues/56507 
         pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) 

 h3. 2022 Jun 28 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 

 h3. 2022 Jun 22 

 https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 

 h3. 2022 Jun 17 

 https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 

 h3. 2022 Jun 16 

 https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/55449 
         pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh 
 * https://tracker.ceph.com/issues/51267 
         CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... 
 * https://tracker.ceph.com/issues/55332 
         Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) 

 h3. 2022 Jun 15 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/55449 
         pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh 


 h3. 2022 Jun 10 

 https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/55449 
         pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh 

 h3. 2022 Jun 09 

 https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/55449 
         pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh 

 h3. 2022 May 06 

 https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 

 h3. 2022 April 18 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific 
(only the mgr/snap_schedule backport PR)

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 

 h3. 2022 March 28 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/54411 
	 mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 
	 33 daemons have recently crashed" during suites/fsstress.sh 

 h3. 2022 March 25 

 https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/ 
 https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/52606 
         qa: test_dirfrag_limit 
 * https://tracker.ceph.com/issues/51905 
         qa: "error reading sessionmap 'mds1_sessionmap'" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 

 h3. 2022 March 22 

 https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/ 

 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/52606 
         qa: test_dirfrag_limit 
 * https://tracker.ceph.com/issues/51183 
         Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 
 * https://tracker.ceph.com/issues/51905 
         qa: "error reading sessionmap 'mds1_sessionmap'" 
 * https://tracker.ceph.com/issues/53360 
         pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 
 * https://tracker.ceph.com/issues/54411 
	 mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 
	 33 daemons have recently crashed" during suites/fsstress.sh 

 h3. 2021 November 22 

 http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi 
 http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi 
 http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi 


 * https://tracker.ceph.com/issues/53300 
	 qa: cluster [WRN] Scrub error on inode 
 * https://tracker.ceph.com/issues/53302 
	 qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1 
 * https://tracker.ceph.com/issues/53314 
	 qa: fs/upgrade/mds_upgrade_sequence test timeout 
 * https://tracker.ceph.com/issues/53316 
	 qa: (smithi150) slow request osd_op, currently waiting for sub ops warning 
 * https://tracker.ceph.com/issues/52624 
	 qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/52396 
	 pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable) 
 * https://tracker.ceph.com/issues/52875 
	 pacific: qa: test_dirfrag_limit 
 * https://tracker.ceph.com/issues/51705 
	 pacific: qa: tasks.cephfs.fuse_mount:mount command failed 
 * https://tracker.ceph.com/issues/39634 
	 qa: test_full_same_file timeout 
 * https://tracker.ceph.com/issues/49748 
	 gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds 
 * https://tracker.ceph.com/issues/51964 
	 qa: test_cephfs_mirror_restart_sync_on_blocklist failure 
 * https://tracker.ceph.com/issues/50223 
	 qa: "client.4737 isn't responding to mclientcaps(revoke)" 


 h3. 2021 November 20 

 https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific 

 * https://tracker.ceph.com/issues/53360 
     pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]" 

 h3. 2021 September 14 (QE) 

 https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/ 
 https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/52606 
     qa: test_dirfrag_limit 
 * https://tracker.ceph.com/issues/52607 
     qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)" 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 

 h3. 2021 Sep 7 

 https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/ 
 https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/ 


 * https://tracker.ceph.com/issues/52396 
     qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable) 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 

 h3. 2021 Aug 30 

 https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 
 * https://tracker.ceph.com/issues/52396 
     qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable) 
 * https://tracker.ceph.com/issues/52487 
     qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation) 
 * https://tracker.ceph.com/issues/51267 
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) 
 * https://tracker.ceph.com/issues/48772 
    qa: pjd: not ok 9, 44, 80 


 h3. 2021 Aug 23 

 https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 
 * https://tracker.ceph.com/issues/52396 
     qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable) 
 * https://tracker.ceph.com/issues/52397 
     qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed 


 h3. 2021 Aug 11 

 https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/ 
 https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 
 * https://tracker.ceph.com/issues/50222 
     osd: 5.2s0 deep-scrub : stat mismatch 

 h3. 2021 July 15 

 https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 
 * https://tracker.ceph.com/issues/51183 
     Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) 
 * https://tracker.ceph.com/issues/50528 
     qa: fs:thrash: pjd suite not ok 80 
 * https://tracker.ceph.com/issues/51706 
     qa: osd deep-scrub stat mismatch 

 h3. 2021 July 13 

 https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/51704 
     Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth) 
 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/51705 
     qa: tasks.cephfs.fuse_mount:mount command failed 
 * https://tracker.ceph.com/issues/48640 
     qa: snapshot mismatch during mds thrashing 

 h3. 2021 June 29 (Integration Branch) 

 https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/ 

Some failures caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/50260 
     pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty" 
 * https://tracker.ceph.com/issues/51183 
     qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' 


 h3. 2021 June 28 

 https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/51440 
     fallocate fails with EACCES 
 * https://tracker.ceph.com/issues/51264 
     TestVolumeClient failure 
 * https://tracker.ceph.com/issues/51266 
     test cleanup failure 
 * https://tracker.ceph.com/issues/51183 
     Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)  

 h3. 2021 June 14 

 https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/ 
 
 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://bugzilla.redhat.com/show_bug.cgi?id=1973276 
     Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com 
 * https://tracker.ceph.com/issues/51263 
     pjdfstest rename test 10.t failed with EACCES 
 * https://tracker.ceph.com/issues/51264 
     TestVolumeClient failure 
 * https://tracker.ceph.com/issues/51266 
     Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'  
 * https://tracker.ceph.com/issues/50279 
     qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c" 
 * https://tracker.ceph.com/issues/51267 
     Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1 

 h3. 2021 June 07 (Integration Branch) 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/50279 
     qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c" 
 * https://tracker.ceph.com/issues/48773 
     qa: scrub does not complete 
 * https://tracker.ceph.com/issues/51170 
     pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split' 
 * https://tracker.ceph.com/issues/48203 (stock kernel update required) 
     qa: quota failure 


 h3. 2021 Apr 28 (QE pre-release) 

 https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/ 
 https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/50258 
     pacific: qa: "run() got an unexpected keyword argument 'stdin_data'" 
 * https://tracker.ceph.com/issues/50260 
     pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty" 
 * https://tracker.ceph.com/issues/49962 
     'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes 
 * https://tracker.ceph.com/issues/50016 
     qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" 
 * https://tracker.ceph.com/issues/48203 (stock kernel update required) 
     qa: quota failure 
 * https://tracker.ceph.com/issues/50528 
     pacific: qa: fs:thrash: pjd suite not ok 20 


 h3. 2021 Apr 22 (Integration Branch) 

 https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/50527 
     pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse) 
 * https://tracker.ceph.com/issues/50528 
     pacific: qa: fs:thrash: pjd suite not ok 20 
 * https://tracker.ceph.com/issues/49500 (fixed in another integration run) 
     qa: "Assertion `cb_done' failed." 
 * https://tracker.ceph.com/issues/48203 (stock kernel update required) 
     qa: quota failure 
 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/50279 
     qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c" 
 * https://tracker.ceph.com/issues/50258 
     pacific: qa: "run() got an unexpected keyword argument 'stdin_data'" 
 * https://tracker.ceph.com/issues/49962 
     'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes 
 * https://tracker.ceph.com/issues/50530 
     pacific: client: abort after MDS blocklist 


 h3. 2021 Apr 21 (Integration Branch) 

 https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/50250 
     mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" 
 * https://tracker.ceph.com/issues/50258 
     pacific: qa: "run() got an unexpected keyword argument 'stdin_data'" 
 * https://tracker.ceph.com/issues/48203 (stock kernel update required) 
     qa: quota failure 
 * https://tracker.ceph.com/issues/50016 
     qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" 
 * https://tracker.ceph.com/issues/50495 
     pacific: client: shutdown race fails with status 141 


 h3. 2021 Apr 07 (Integration Branch) 

 https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/ 

 * https://tracker.ceph.com/issues/45434 
     qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed 
 * https://tracker.ceph.com/issues/48805 
     mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" 
 * https://tracker.ceph.com/issues/49500 
     qa: "Assertion `cb_done' failed." 
 * https://tracker.ceph.com/issues/50258 (new) 
     pacific: qa: "run() got an unexpected keyword argument 'stdin_data'" 
 * https://tracker.ceph.com/issues/49962 
     'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes 
 * https://tracker.ceph.com/issues/48203 (stock kernel update required) 
     qa: quota failure 
 * https://tracker.ceph.com/issues/50260 
     pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty" 
 * https://tracker.ceph.com/issues/50016 
     qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"