Pacific » History » Version 114

Jos Collin, 09/12/2023 05:27 AM

h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. 2023 September 12

* https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi
  Trackers are already created for test failures.

h3. 2023 August 31

https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

Some spurious infrastructure / valgrind noise during cleanup.

h3. 2023 August 22

Pacific v16.2.14 QA

https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/

* https://tracker.ceph.com/issues/62578
    mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62579
    client: evicted warning because client completes unmount before thrashed MDS comes back
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 16-2

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infra noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh pkg and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
  Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
  teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
  qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
  FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
  Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
  qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
  cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
  test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
  test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
  pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 ansible dead failures.
12 transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
  sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 Aug 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run 1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 Aug 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run 1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
  tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
  client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Aug 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
  test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
  tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
  qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - :ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
	qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
	qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
	mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
	33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
	qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
	qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
	qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
	qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
	qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
	pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
	pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
	pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
	qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
	gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
	qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
	qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures were caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures were caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"