h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos Collin
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. ADD NEW ENTRY BELOW

h3. 22 Nov 2023

https://pulpito.ceph.com/yuriw-2023-11-14_20:31:57-fs-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
  pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
* ior/mdtest failures because packages were missing from download.ceph.com
* test_acls failed because a known distro wasn't detected
* https://tracker.ceph.com/issues/52624 (see the note below)
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 15 Nov 2023

https://pulpito.ceph.com/yuriw-2023-10-24_00:02:48-fs-wip-yuri7-testing-2023-10-23-1230-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/63539
  fs/full/subvolume_clone.sh: Health check failed: 1 full osd(s) (OSD_FULL)
* test_acls: distro name for RHEL 8.4 wasn't recognized by xfstests_dev.py
* ior package was missing because it was deleted from download.ceph.com
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 8 Nov 2023

fs: https://pulpito.ceph.com/vshankar-2023-11-06_07:50:57-fs-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/
smoke: https://pulpito.ceph.com/vshankar-2023-11-06_07:53:57-smoke-wip-vshankar-testing1-2023-11-02-1726-pacific-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62501
  pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2023 September 12

https://pulpito.ceph.com/yuriw-2023-08-21_21:30:15-fs-wip-yuri2-testing-2023-08-21-0910-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 31

https://github.com/ceph/ceph/pull/53189
https://github.com/ceph/ceph/pull/53243
https://github.com/ceph/ceph/pull/53185
https://github.com/ceph/ceph/pull/52744
https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/pdonnell-2023-08-31_15:31:51-fs-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

Some spurious infrastructure / valgrind noise during cleanup.

h3. 2023 August 22

Pacific v16.2.14 QA

https://pulpito.ceph.com/yuriw-2023-08-22_14:48:56-fs-pacific-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-08-22_23:49:19-fs-pacific-release-distro-default-smithi/

* https://tracker.ceph.com/issues/62578
    mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62579
    client: evicted warning because client completes unmount before thrashed MDS comes back
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 August 16-2

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 16

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

* https://tracker.ceph.com/issues/62499
    testing (?): deadlock ffsb task
* https://tracker.ceph.com/issues/62501
    pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC

h3. 2023 August 11

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific
https://pulpito.ceph.com/yuriw-2023-08-15_01:22:18-fs-wip-yuri8-testing-2023-08-11-0834-pacific-distro-default-smithi/

Some infra noise caused a dead job.

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
    fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault
* "fs/full/subvolume_snapshot_rm.sh" failure caused by bluestore crash.

h3. 2023 August 03

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh packages and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
  Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
  teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
  qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
  FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
  Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
  qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
  cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
  test_acls: expected a yum based or a apt based system (see the sketch below)

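The test_acls entries above and in the 15 Nov 2023 run (https://tracker.ceph.com/issues/58726, "distro name for RHEL 8.4 wasn't recognized by xfstests_dev.py") are distro-detection failures rather than ACL failures: dependency installation picks an installer from the distro name and errors out before any ACL test runs. A minimal sketch of that failure mode, assuming nothing about the actual xfstests_dev.py code beyond the error text quoted above; the name tables and helper below are hypothetical:

<pre><code class="python">
# Hypothetical distro-to-package-manager mapping; the real test derives this
# from the remote host's distro name/version.
APT_BASED = {"ubuntu", "debian"}
YUM_BASED = {"centos", "rhel", "fedora"}

def pkg_manager_for(distro: str) -> str:
    """Pick an installer for xfstests build dependencies, or fail loudly."""
    name = distro.strip().lower()
    if name in APT_BASED:
        return "apt"
    if name in YUM_BASED:
        return "yum"
    # This is the shape of the failure triaged above: an unrecognized distro
    # string falls through before any ACL test runs.
    raise RuntimeError(f"expected a yum based or a apt based system, got {distro!r}")

print(pkg_manager_for("rhel"))  # -> "yum"
try:
    pkg_manager_for("rhel 8.4")  # unrecognized string -> the error triaged above
except RuntimeError as err:
    print(err)
</code></pre>
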
h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
    qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
    pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 jobs dead due to ansible failures.
12 transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
  sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 AUG 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 AUG 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
  tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
  client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 AUG 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
  test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
  tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
  qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
        tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
        client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
        pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
        pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
        pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
        CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
        Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only the mgr/snap_schedule backport PR)

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
        mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
        33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
        mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
        33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
        qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
        qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
        qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
        qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
        qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
        pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
        pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
        pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
        qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
        gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
        qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
        qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"