Pacific » History » Version 121

Rishabh Dave, 11/08/2023 05:05 PM

h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos Collin
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. ADD NEW ENTRY BELOW

h3. 8 Nov 2023

fs: https://pulpito.ceph.com/vshankar-2023-11-06_07:50:57-fs-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/
smoke: https://pulpito.ceph.com/vshankar-2023-11-06_07:53:57-smoke-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62501
  pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2023 September 12

https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 31

https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

Some spurious infrastructure / valgrind noise during cleanup.

h3. 2023 August 22

Pacific v16.2.14 QA

https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/

* https://tracker.ceph.com/issues/62578
    mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62579
    client: evicted warning because client completes unmount before thrashed MDS comes back
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 16-2

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infra noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh pkg and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
  Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
  teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
  qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
  FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
  Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
  qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
  cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
  test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
  test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
  pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 dead jobs due to ansible failures.
12 transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
  sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 AUG 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 AUG 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
  tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
  client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 AUG 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
  test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
  tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
  qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only mgr/snap_schedule backport pr)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"