h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. 2023 September 12

* https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi

2 test failures found:
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi/7374440
- https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi/7374563
Trackers have already been created for the test failures.

h3. 2023 August 31

https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") (see the scrub triage note below)

Some spurious infrastructure / valgrind noise during cleanup.

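The "Scrub error on inode ... see mds.a log and `damage ls` output" entry above (https://tracker.ceph.com/issues/50250) points at the MDS damage table and scrub machinery. A minimal triage sketch, assuming a filesystem named cephfs and that rank 0 reported the error (both names are illustrative, not taken from this run):

<pre>
# List any metadata damage entries the MDS has recorded.
ceph tell mds.cephfs:0 damage ls

# Re-run a recursive scrub from the root and watch its progress; this is the
# step that recomputes the rstats the warning complains about.
ceph tell mds.cephfs:0 scrub start / recursive
ceph tell mds.cephfs:0 scrub status
</pre>
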
h3. 2023 August 22

Pacific v16.2.14 QA

https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/

* https://tracker.ceph.com/issues/62578
    mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log (see the health-check triage note below)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62579
    client: evicted warning because client completes unmount before thrashed MDS comes back
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

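The PG_AVAILABILITY / PG_DEGRADED warnings that recur throughout these runs (https://tracker.ceph.com/issues/52624, https://tracker.ceph.com/issues/62578) are transient peering/degradation windows caught by the cluster-log check. A hedged sketch of the commands typically used to confirm that, shown for illustration rather than taken from any of these jobs:

<pre>
# Which health checks fired and which PGs they name.
ceph health detail

# PGs currently stuck peering or degraded.
ceph pg dump_stuck inactive
ceph pg dump_stuck degraded

# Overall cluster and PG summary.
ceph -s
</pre>
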
h3. 2023 August 16-2

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (see the full-ratio note below)

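The recurring mgr-osd-full failure (https://tracker.ceph.com/issues/62501) comes from the fs/full tests, which deliberately fill OSDs until writes return ENOSPC; the aborts happen as OSDs cross their full thresholds. A rough sketch of the ratios usually inspected while triaging; the values shown are the Ceph defaults, not data from these runs:

<pre>
# Current full / backfillfull / nearfull thresholds (defaults 0.95 / 0.90 / 0.85).
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

# Pool and device-class usage at the time of the failure.
ceph df detail

# Example only, not a recommendation: temporarily raising the full ratio while debugging.
# ceph osd set-full-ratio 0.96
</pre>
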
h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infra noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)" (see the caps-revoke note below)
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

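A "client.X isn't responding to mclientcaps(revoke)" warning (https://tracker.ceph.com/issues/50223) means the MDS asked a client to return capabilities and the client did not do so within the warning interval. A hedged sketch of the usual first checks, assuming an active MDS daemon named mds.a (the daemon name is illustrative):

<pre>
# Map the client id from the warning to a session (address, mount, features).
ceph tell mds.a session ls

# Show in-flight MDS operations, including any stalled waiting on the revoke.
ceph tell mds.a ops
</pre>
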
h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh pkg and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
  Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124 (see the timeout note below)
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
  teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

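The "fsstress.sh failed with errno 124" entries (https://tracker.ceph.com/issues/58340) are not an error code from fsstress itself: 124 is the exit status GNU timeout returns when the command it wraps runs past its deadline, so the workunit hung until the surrounding timeout expired. A quick illustration with plain coreutils, unrelated to any of these jobs:

<pre>
# timeout(1) kills the command after 1 second and exits with status 124.
timeout 1 sleep 5
echo $?    # prints 124
</pre>
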
h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
  qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
  FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
  Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
  qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
  cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
  test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
    pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 ansible dead failures.
12 transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
  sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 AUG 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 AUG 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* Most of the failures passed in the re-run; please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run. (see the re-run scheduling note below)
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
  tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
  client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

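The re-runs referenced in these entries are scheduled with teuthology-suite against the same branch, restricted to the jobs that did not pass. A hedged sketch of how that is commonly done (flag names are from recent teuthology and may differ locally; the run name is the one from this section):

<pre>
# Reschedule only the failed and dead jobs of the earlier run.
teuthology-suite --machine-type smithi --priority 100 \
    --rerun yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi \
    --rerun-statuses fail,dead
</pre>
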
h3. 2022 AUG 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
  test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
  tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
  qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - :ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
        tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
        client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
        pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
        pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
        pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
        CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
        Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only mgr/snap_schedule backport pr)

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
        mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
        33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
        mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
        33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
        qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
        qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
        qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
        qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
        pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
        pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
        pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
        qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
        gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
        qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
        qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch (see the scrub-repair note below)

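A deep-scrub "stat mismatch" like the 5.2s0 entry above (https://tracker.ceph.com/issues/50222) is an inconsistency reported for PG 5.2 (the s0 suffix is the erasure-coded shard). A hedged sketch of the standard follow-up, shown as an illustration of the triage steps rather than anything run against these clusters:

<pre>
# List the inconsistent objects recorded by the last scrub of the PG.
rados list-inconsistent-obj 5.2 --format=json-pretty

# Re-scrub and, once the mismatch is understood, repair the PG.
ceph pg deep-scrub 5.2
ceph pg repair 5.2
</pre>
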
h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"