Revision 67 (Kotresh Hiremath Ravishankar, 08/10/2022 07:46 AM) → Revision 68/134 (Kotresh Hiremath Ravishankar, 08/10/2022 10:49 AM)
h1. Pacific

h2. On-call Schedule

* Feb: Patrick
* Mar: Jeff
* Apr: Jos Collin
* May: Ramana
* Jun: Xiubo
* Jul: Rishabh
* Aug: Kotresh
* Sep: Venky
* Oct: Milind

h2. Reviews

h3. 2022 AUG 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
    test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
    "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
    qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific (only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures were caused by an incomplete backport of https://github.com/ceph/ceph/pull/42065. Some package failures were caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"