h1. Pacific

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h2. Reviews

h3. 2023 August 8

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-08-01-0753-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62164
    qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/62465
    pacific (?): LibCephFS.ShutdownRace segmentation fault

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-19-1340-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58992
    test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/62160
    mds: MDS abort because newly corrupt dentry to be committed
* https://tracker.ceph.com/issues/61201
    qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific (recovery thread crash)

h3. 2023 May 17

https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-05-17_14:20:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-default-smithi/ (fresh pkg and re-run because of package/installation issues)

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61201 (NEW)
  Test failure: test_rebuild_moved_file (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58674
  teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 11

https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
  qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/58992
  test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/55446
  fs/upgrade/mds_upgrade_sequence - hit max job timeout

h3. 2023 May 4

https://pulpito.ceph.com/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/59560
  qa: RuntimeError: more than one file system available
* https://tracker.ceph.com/issues/59626
  FSMissing: File system xxxx does not exist in the map
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 124
* https://tracker.ceph.com/issues/58992
  test_acls
* https://tracker.ceph.com/issues/48773
  Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2023 Apr 13

https://pulpito.ceph.com/yuriw-2023-04-04_15:06:57-fs-wip-yuri8-testing-2023-03-31-0812-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
  Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/54108
  qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/58340
  fsstress.sh failed with errno 125
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/49287
  cephadm: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
* https://tracker.ceph.com/issues/58726
  test_acls: expected a yum based or a apt based system

h3. 2022 Dec 07

https://pulpito.ceph.com/yuriw-2022-12-07_15:45:30-fs-wip-yuri4-testing-2022-12-05-0921-pacific-distro-default-smithi/

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50224
  test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/58221
  pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Dec 02

many transient git.ceph.com related timeouts
many transient 'Failed to connect to the host via ssh' failures

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Dec 01

many transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Nov 18

https://trello.com/c/ecysAxl6/1656-wip-yuri-testing-2022-11-18-1500-pacific-old-wip-yuri-testing-2022-10-23-0729-pacific-old-wip-yuri-testing-2022-10-21-0927-pacif
https://pulpito.ceph.com/yuriw-2022-11-28_16:28:56-fs-wip-yuri-testing-2022-11-18-1500-pacific-distro-default-smithi/

2 ansible dead failures.
12 transient git.ceph.com related timeouts

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-10_17:18:40-fs-wip-yuri5-testing-2022-10-10-0837-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57723
  pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/56644
  qa: test_rapid_creation fails with "No space left on device"
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57892
  sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase

h3. 2022 Oct 06

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-21_13:07:25-rados-wip-yuri5-testing-2022-09-20-1347-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Sep 27

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-22_22:35:04-fs-wip-yuri2-testing-2022-09-22-1400-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/56507
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Sep 22

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 Sep 19

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57594
    pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

h3. 2022 Sep 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-13_14:22:44-fs-wip-yuri5-testing-2022-09-09-1109-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/48773
    Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete

h3. 2022 AUG 18

https://pulpito.ceph.com/yuriw-2022-08-18_23:16:33-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267 - tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/48773 - Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi103 with status 23: 'mkdir -p...' - qa: scrub does not complete
* https://tracker.ceph.com/issues/51964 - qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57218 - qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
* https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run1: https://pulpito.ceph.com/yuriw-2022-08-19_21:01:11-fs-wip-yuri10-testing-2022-08-18-1400-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/57219 - mds crashed while running workunit test fs/misc/dirfrag.sh

h3. 2022 AUG 11

https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* Most of the failures passed in the re-run. Please check the re-run failures below.
  - https://tracker.ceph.com/issues/57147 - test_full_fsync (tasks.cephfs.test_full.TestClusterFull) - mds stuck in up:creating state - osd crash
  - https://tracker.ceph.com/issues/52624 - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - https://tracker.ceph.com/issues/51282 - cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  - https://tracker.ceph.com/issues/50224 - test_mirroring_init_failure_with_recovery - (tasks.cephfs.test_mirroring.TestMirroring)
  - https://tracker.ceph.com/issues/57083 - https://tracker.ceph.com/issues/53360 - tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04}
  - https://tracker.ceph.com/issues/51183 - Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
  - https://tracker.ceph.com/issues/56507 - Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

Re-run-1: https://pulpito.ceph.com/yuriw-2022-08-15_13:45:54-fs-wip-yuri3-testing-2022-08-11-0809-pacific-distro-default-smithi/
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* tasks/mirror - Command failed on smithi078 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 ..." - Asked for re-run.
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
  tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
  client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
  Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 AUG 04

https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/57087
  test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51267
  tasks/workunit/snaps failure - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status
* https://tracker.ceph.com/issues/53360
* https://tracker.ceph.com/issues/57083
  qa/import-legacy tasks.cephfs.fuse_mount:mount command failed - :ModuleNotFoundError: No module named 'ceph_volume_client'
* https://tracker.ceph.com/issues/56507
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 15

https://pulpito.ceph.com/yuriw-2022-07-21_22:57:30-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi
Re-run: https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57083
* https://tracker.ceph.com/issues/53360
        tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} - tasks.cephfs.fuse_mount:mount command failed
        client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56507
        pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 08

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/56506
        pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
* https://tracker.ceph.com/issues/56507
        pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 Jun 28

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 22

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-06-21-0704-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 17

https://pulpito.ceph.com/yuriw-2022-06-15_17:11:32-fs-wip-yuri10-testing-2022-06-15-0732-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Jun 16

https://pulpito.ceph.com/yuriw-2022-06-15_17:08:32-fs-wip-yuri-testing-2022-06-14-0744-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh
* https://tracker.ceph.com/issues/51267
        CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/55332
        Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 Jun 15

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_13:32:29-fs-wip-yuri10-testing-2022-06-08-0730-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 10

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2022-06-07-1529-pacific

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 Jun 09

https://pulpito.ceph.com/yuriw-2022-06-07_16:05:25-fs-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/55449
        pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 May 06

https://pulpito.ceph.com/?sha1=73636a1b00037ff974bcdc969b009c5ecec626cc

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2022 April 18

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-04-18-0609-pacific
(only mgr/snap_schedule backport pr)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 March 28

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_20:52:40-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-26_19:52:48-fs-wip-yuri4-testing-2022-03-25-1203-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2022 March 25

https://pulpito.ceph.com/yuriw-2022-03-22_18:42:28-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-03-23_14:04:56-fs-wip-yuri8-testing-2022-03-22-0910-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 March 22

https://pulpito.ceph.com/yuriw-2022-03-17_15:03:06-fs-wip-yuri10-testing-2022-03-16-1432-pacific-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52606
        qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51183
        Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/51905
        qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/53360
        pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"
* https://tracker.ceph.com/issues/54411
    mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available;
    33 daemons have recently crashed" during suites/fsstress.sh

h3. 2021 November 22

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-06-1322-pacific-distro-basic-smithi

* https://tracker.ceph.com/issues/53300
    qa: cluster [WRN] Scrub error on inode
* https://tracker.ceph.com/issues/53302
    qa: sudo logrotate /etc/logrotate.d/ceph-test.conf failed with status 1
* https://tracker.ceph.com/issues/53314
    qa: fs/upgrade/mds_upgrade_sequence test timeout
* https://tracker.ceph.com/issues/53316
    qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52396
    pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52875
    pacific: qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/51705
    pacific: qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/39634
    qa: test_full_same_file timeout
* https://tracker.ceph.com/issues/49748
    gibba: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 November 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211120.024903-pacific

* https://tracker.ceph.com/issues/53360
    pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only support [2]"

h3. 2021 September 14 (QE)

https://pulpito.ceph.com/yuriw-2021-09-10_15:01:12-fs-pacific-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-09-13_14:09:59-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-14_15:09:45-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52606
    qa: test_dirfrag_limit
* https://tracker.ceph.com/issues/52607
    qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Sep 7

https://pulpito.ceph.com/yuriw-2021-09-07_17:38:38-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-09-07_23:57:28-fs-wip-yuri-testing-2021-09-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed

h3. 2021 Aug 30

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2021-08-30-0930-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52487
    qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh)
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 Aug 23

https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/52396
    qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* https://tracker.ceph.com/issues/52397
    qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed

h3. 2021 Aug 11

https://pulpito.ceph.com/yuriw-2021-08-11_14:28:23-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-08-12_15:32:35-fs-wip-yuri8-testing-2021-08-09-0844-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch

h3. 2021 July 15

https://pulpito.ceph.com/yuriw-2021-07-13_17:37:59-fs-wip-yuri-testing-2021-07-13-0812-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/50528
    qa: fs:thrash: pjd suite not ok 80
* https://tracker.ceph.com/issues/51706
    qa: osd deep-scrub stat mismatch

h3. 2021 July 13

https://pulpito.ceph.com/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testing-2021-07-08-1142-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/51704
    Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51705
    qa: tasks.cephfs.fuse_mount:mount command failed
* https://tracker.ceph.com/issues/48640
    qa: snapshot mismatch during mds thrashing

h3. 2021 June 29 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/

Some failures were caused by an incomplete backport of: https://github.com/ceph/ceph/pull/42065
Some package failures were caused by a missing nautilus package, e.g.: https://pulpito.ceph.com/yuriw-2021-06-29_00:15:33-fs-wip-yuri3-testing-2021-06-28-1259-pacific-distro-basic-smithi/6241300/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'

h3. 2021 June 28

https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51440
    fallocate fails with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    test cleanup failure
* https://tracker.ceph.com/issues/51183
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2021 June 14

https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-06-14-0717-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://bugzilla.redhat.com/show_bug.cgi?id=1973276
    Could not reconnect to ubuntu@smithi076.front.sepia.ceph.com
* https://tracker.ceph.com/issues/51263
    pjdfstest rename test 10.t failed with EACCES
* https://tracker.ceph.com/issues/51264
    TestVolumeClient failure
* https://tracker.ceph.com/issues/51266
    Command failed on smithi204 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1

h3. 2021 June 07 (Integration Branch)

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2021-06-07-0955-pacific

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51170
    pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 Apr 28 (QE pre-release)

https://pulpito.ceph.com/yuriw-2021-04-28_19:27:30-fs-pacific-distro-basic-smithi/
https://pulpito.ceph.com/yuriw-2021-04-30_13:46:58-fs-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20

h3. 2021 Apr 22 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-22_16:46:11-fs-wip-yuri5-testing-2021-04-20-0819-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/50527
    pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
* https://tracker.ceph.com/issues/50528
    pacific: qa: fs:thrash: pjd suite not ok 20
* https://tracker.ceph.com/issues/49500 (fixed in another integration run)
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/50530
    pacific: client: abort after MDS blocklist

h3. 2021 Apr 21 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-0937-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50258
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50495
    pacific: client: shutdown race fails with status 141

h3. 2021 Apr 07 (Integration Branch)

https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50258 (new)
    pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
* https://tracker.ceph.com/issues/49962
    'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/50260
    pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"