Pacific » History » Version 119

Rishabh Dave, 11/08/2023 05:04 PM

h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos Collin
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. ADD NEW ENTRY BELOW

h3. 2023 September 12

https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 31

https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

Some spurious infrastructure / valgrind noise during cleanup.

h3. 2023 August 22

Pacific v16.2.14 QA

https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/

* https://tracker.ceph.com/issues/62578
    mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62579
    client: evicted warning because client completes unmount before thrashed MDS comes back
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 16-2

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infrastructure noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh packages and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
    Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
    teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
    fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/55446
    fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
    qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
    FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
    cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
    test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

Many transient git.ceph.com-related timeouts.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
    pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

Many transient git.ceph.com-related timeouts.
Many transient 'Failed to connect to the host via ssh' failures.

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

Many transient git.ceph.com-related timeouts.

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 ansible dead failures.
12 transient git.ceph.com-related timeouts.

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
    snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
    sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 Aug 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218
    qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run 1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219
    mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 Aug 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

Most of the failures passed in the re-run; see the re-run failures below.

* https://tracker.ceph.com/issues/57147
    test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57083 / https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run 1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Aug 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
    test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
    qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

559 55 Venky Shankar
h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"