Revision 25 (Patrick Donnelly, 10/06/2021 12:09 AM) → Revision 26/272 (Patrick Donnelly, 10/15/2021 03:10 PM)
h3. 2021 October 12
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
Some failures were caused by a teuthology bug: https://tracker.ceph.com/issues/52944
A new test caused a failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
h3. 2021 October 02
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
Some failures were caused by the cephadm upgrade test; fixed in a follow-up qa commit.
test_simple failures were caused by a PR in this set.
A few reruns were needed because of QA infra noise.
* https://tracker.ceph.com/issues/52822
qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
h3. 2021 September 20
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
* https://tracker.ceph.com/issues/52677
qa: test_simple failure
* https://tracker.ceph.com/issues/51279
kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
qa: ffsb timeout
h3. 2021 September 10
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
kclient hangs on umount (testing branch)
h3. 2021 August 27
Several jobs died because of device failures.
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
* https://tracker.ceph.com/issues/52430
mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
qa: acls does not compile on centos stream
h3. 2021 July 30
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
pybind/mgr/stats: KeyError
h3. 2021 July 28
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
* https://tracker.ceph.com/issues/51905
qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
kclient hangs on umount (testing branch)
h3. 2021 July 16
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
qa: snaptest-git-ceph bus error
h3. 2021 July 04
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
h3. 2021 July 01
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
* https://tracker.ceph.com/issues/51197
qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
h3. 2021 June 26
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
* https://tracker.ceph.com/issues/51183
qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
qa: pjd: not ok 9, 44, 80
h3. 2021 June 21
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
h3. 2021 June 16
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
A class of MDS abort failures was caused by PR: https://github.com/ceph/ceph/pull/41667
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
"wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
h3. 2021 June 14
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
* https://tracker.ceph.com/issues/51169
qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
h3. 2021 June 13
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
* https://tracker.ceph.com/issues/51169
qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
h3. 2021 June 11
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
* https://tracker.ceph.com/issues/51169
qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
qa: fs:bugs does not specify distro
h3. 2021 June 03
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
MDSMonitor: removes MDS coming out of quorum election
h3. 2021 May 18
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
A regression in the testing kernel caused some failures. Ilya fixed those and the
rerun looked better. Some odd new noise in the rerun relating to packaging and "No
module named 'tasks.ceph'".
* https://tracker.ceph.com/issues/50824
qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
qa: quota failure
h3. 2021 May 18
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
msg: active_connections regression
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
h3. 2021 May 14
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
* https://tracker.ceph.com/issues/48812
qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
msg: active_connections regression
* https://tracker.ceph.com/issues/50822
qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
h3. 2021 May 11
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
One class of failures was caused by a PR.
* https://tracker.ceph.com/issues/48812
qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
msg: active_connections regression
* https://tracker.ceph.com/issues/50825
qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
qa: RuntimeError: timeout waiting for cluster to stabilize
h3. 2021 May 01
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
qa: quota failure
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
msg: active_connections regression
* https://tracker.ceph.com/issues/45591
mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
h3. 2021 Apr 15
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
* https://tracker.ceph.com/issues/50281
qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
mds: monclient: wait_auth_rotating timed out after 30
h3. 2021 Apr 08
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
qa: untar_snap_rm timeout
h3. 2021 Apr 08
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
* https://tracker.ceph.com/issues/50246
mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
h3. 2021 Apr 07
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
* https://tracker.ceph.com/issues/50215
qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
qa: test_mirroring_init_failure_with_recovery failure
h3. 2021 Apr 01
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
* https://tracker.ceph.com/issues/48772
qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
h3. 2021 Mar 24
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
* https://tracker.ceph.com/issues/49500
qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
h3. 2021 Mar 18
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
qa: quota failure
* https://tracker.ceph.com/issues/49928
client: items pinned in cache preventing unmount x2
h3. 2021 Mar 15
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
* https://tracker.ceph.com/issues/49842
qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
mgr: drops command on the floor
One additional failure was caused by PR: https://github.com/ceph/ceph/pull/39969
h3. 2021 Mar 09
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
* https://tracker.ceph.com/issues/49500
qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing