Bug #65308

Updated by Rishabh Dave about 1 month ago

Link to the failure - https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/7625691/. 

 Description of this job - 
 <pre> 
 fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/ignorelist_health} 
 </pre> 

 Failure reason - 
 <pre> 
 "2024-03-27T08:20:00.000142+0000 mon.smithi028 (mon.0) 868 : cluster [ERR] Health detail: HEALTH_ERR 1 filesystem is degraded; 1 filesystem is offline" in cluster log  
 </pre> 
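For reference, the @overrides/ignorelist_health@ fragment listed in the job description is the usual mechanism for whitelisting expected health warnings in the cluster log; the job is flagged presumably because this ERR line isn't covered by the ignorelist in effect. A minimal sketch of what such an override fragment generally looks like (the entries below are illustrative, not the actual contents of that fragment):
<pre>
# Illustrative teuthology override fragment; entries are regex-like strings
# matched against cluster log lines and are placeholders here.
overrides:
  ceph:
    log-ignorelist:
      - FS_DEGRADED
      - MDS_ALL_DOWN
</pre>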

The health warning for the file system being offline, @FS is offline@, is expected since @ceph fs fail@ was run before @ceph fs volume rename@. But the health warning for the file system being degraded, @FS is degraded@, isn't expected. This warning is first seen in the output following this command -
 <pre> 
 2024-03-27T08:19:56.103 DEBUG:teuthology.orchestra.run.smithi028:> sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:155268c4e432a12433aa833f174f9fe3b1016ae0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 288ca2e2-ec11-11ee-95d0-87774f69a715 -- bash -c 'ceph fs set foo refuse_client_session true 
 </pre> 

The code for this is located here - https://github.com/ceph/ceph/blob/main/qa/suites/fs/cephadm/renamevolume/1-rename.yaml#L5
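For readers who don't want to open the link, here is a rough sketch of the kind of sequence involved, based only on the commands mentioned above; the target volume name @bar@ and the exact task layout are placeholders, and the linked @1-rename.yaml@ is authoritative:
<pre>
# Hypothetical sketch, not the actual 1-rename.yaml.
tasks:
- cephadm.shell:
    host.a:
      - ceph fs fail foo                            # expected to produce the "offline" warning
      - ceph fs set foo refuse_client_session true  # "degraded" warning first seen after this, per the log above
      - ceph fs volume rename foo bar --yes-i-really-mean-it
</pre>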

The same failure, for a different job, was also seen in Milind's run - https://pulpito.ceph.com/mchangir-2024-03-22_09:49:57-fs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/7616441/.
