h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infra noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh pkg and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
    Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
    teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
    fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/55446
    fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
    qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
    FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
    test_acls
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
    cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
    test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

Many transient git.ceph.com-related timeouts.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
    pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

Many transient git.ceph.com-related timeouts.
Many transient 'Failed to connect to the host via ssh' failures.

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

Many transient git.ceph.com-related timeouts.

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 dead jobs caused by ansible failures.
12 transient git.ceph.com-related timeouts.

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
    snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
    sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 Aug 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run 1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 Aug 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run 1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Aug 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
    test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
    qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures were caused by an incomplete backport of https://github.com/ceph/ceph/pull/42065.
Some package failures were caused by a missing nautilus package, e.g. https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"