h1. Pacific

h2. On-call Schedule

* Feb: Patrick
* Mar: Jeff
* Apr: Jos Collin
* May: Ramana
* Jun: Xiubo
* Jul: Rishabh
* Aug: Kotresh
* Sep: Venky
* Oct: Milind

h2. Reviews

h3. 2022 Aug 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* Most of the failures passed in the re-run; see the re-run failures below. (A sketch of how only the failed jobs are rescheduled follows this list.)
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - "cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
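
The re-runs referenced in these reviews are scheduled with teuthology-suite's rerun support. The following is a minimal sketch only, assuming a recent teuthology; the priority and machine-type values are illustrative, and the run name is copied from the run above.

<pre>
# Sketch: reschedule only the failed/dead jobs from the run above (not a verbatim command).
teuthology-suite \
  --machine-type smithi \
  --priority 75 \
  --rerun yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi \
  --rerun-statuses fail,dead
</pre>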

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Aug 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
    test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
    "cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
    qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures were caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures were caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"