Pacific » History » Version 71

Venky Shankar, 09/15/2022 10:56 AM

h1. Pacific

h2. On-call Schedule

* Feb: Patrick
* Mar: Jeff
* Apr: Jos Collin
* May: Ramana
* Jun: Xiubo
* Jul: Rishabh
* Aug: Kotresh
* Sep: Venky
* Oct: Milind

h2. Reviews

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
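The two health-check warnings above recur across most runs on this page. When triaging a new run, filtering them out of a pasted log excerpt first (a sketch; the sample lines below are illustrative, not from a real run) leaves only the novel failures to look at:

```python
# Separate known-noise health-check warnings (tracked in issues 51282
# and 52624) from the rest of a pasted cluster-log excerpt.
KNOWN_NOISE = ("PG_DEGRADED", "PG_AVAILABILITY")

log_lines = [
    'cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)',
    'cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)',
    "cluster [ERR] error reading sessionmap 'mds1_sessionmap'",
]

noise = [l for l in log_lines if any(tag in l for tag in KNOWN_NOISE)]
novel = [l for l in log_lines if l not in noise]
print(len(noise), len(novel))  # → 2 1
```

Anything left in `novel` is worth a fresh look before matching it to an existing tracker issue.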

h3. 2022 Aug 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh
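Each entry above pairs a tracker URL with a one-line description, and the same issue often shows up in both a run and its re-run. A minimal helper (hypothetical, not part of any Ceph tooling; the sample entries are abbreviated) can group pasted triage lines by issue number so repeats stand out:

```python
import re

def group_by_issue(lines):
    """Group 'tracker-url - description' triage lines by tracker issue id."""
    groups = {}
    for line in lines:
        m = re.search(r"tracker\.ceph\.com/issues/(\d+)", line)
        if m:
            desc = line.split(" - ", 1)[-1].strip()
            groups.setdefault(m.group(1), []).append(desc)
    return groups

entries = [
    "https://tracker.ceph.com/issues/52624 - Reduced data availability: 1 pg peering",
    "https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh",
    "https://tracker.ceph.com/issues/52624 - Reduced data availability: 1 pg peering",
]
grouped = group_by_issue(entries)
print(len(grouped["52624"]))  # → 2
```

Issues that appear more than once across a run and its re-run are the recurring failures; singletons are candidates for new tracker tickets.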

h3. 2022 Aug 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Aug 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
    test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
    qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures were caused by an incomplete backport: https://github.com/ceph/ceph/pull/42065
Some package failures were caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"