h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos Collin
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. ADD NEW ENTRY BELOW

h3. 8 Nov 2023

fs: https://pulpito.ceph.com/vshankar-2023-11-06_07:50:57-fs-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

smoke: https://pulpito.ceph.com/vshankar-2023-11-06_07:53:57-smoke-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/

h3. 2023 September 12

https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 31

https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

Some spurious infrastructure / valgrind noise during cleanup.

h3. 2023 August 22

Pacific v16.2.14 QA

https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/

* https://tracker.ceph.com/issues/62578
    mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62579
    client: evicted warning because client completes unmount before thrashed MDS comes back
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 16-2

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infra noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (re-run with fresh packages because of package/installation issues)

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
    Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
    teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
    fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/55446
    fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
    qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
    FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
    test_acls
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
    cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
    test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
    pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 dead jobs due to ansible failures.
12 transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
    pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
    snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
    sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 AUG 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 AUG 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 AUG 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
    test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
    tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
    qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
    tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
    client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
    pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
    pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
    pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only mgr/snap_schedule backport pr)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures caused by an incomplete backport of https://github.com/ceph/ceph/pull/42065.
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"