Ceph: Issues
https://tracker.ceph.com/ (2022-02-03T15:44:57Z)
rbd - Bug #54129 (In Progress): Removing non-existent rbd mirror schedule returns with success
https://tracker.ceph.com/issues/54129 (2022-02-03T15:44:57Z, Sunny Kumar)
sunny@o05 build -> rbd mirror snapshot schedule ls --pool data --cluster site-a --recursive
POOL NAMESPACE IMAGE SCHEDULE
data image1 every 5m
sunny@o05 build -> rbd --cluster site-a mirror snapshot schedule remove 1m --pool data --image image1

(The remove above returns success even though there is no 1m schedule for image1.)

---------
We can add validation here for the case where a non-existent schedule is being removed.
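A minimal sketch of the proposed check, assuming a handler that keeps the configured intervals per level spec (the function, argument names, and return convention are illustrative, not the actual rbd_support module API):

import errno

# Hypothetical helper: reject removal of an interval that was never scheduled,
# instead of silently reporting success.
def remove_schedule(schedules, level_spec, interval):
    configured = schedules.get(level_spec, [])
    if interval not in configured:
        return -errno.EINVAL, '', 'Invalid schedule: no {} schedule for {}'.format(
            interval, level_spec)
    configured.remove(interval)
    return 0, '', ''

With a check like this in place, the remove would fail with EINVAL instead of exiting 0, as in the expected behavior shown next.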
Expected behavior:
sunny@o05 build -> rbd mirror snapshot schedule ls --pool data --cluster site-1 --recursive
POOL NAMESPACE IMAGE SCHEDULE
data image1 every 5m
sunny@o05 build -> rbd --cluster site-1 mirror snapshot schedule remove 1m --pool data --image image1
rbd: rbd mirror snapshot schedule remove failed: (22) Invalid argument: Invalid schedule

rbd - Bug #53966 (In Progress): rbd mirror snapshot scheduler picks only a single schedule per image
https://tracker.ceph.com/issues/53966 (2022-01-21T16:03:05Z, Sunny Kumar)
rbd mirror snapshot schedule picks only a single schedule per image.

Steps to reproduce:
1. Enable snapshot-based mirroring
2. Add 2 schedules for the image participating in mirroring
3. Observe snapshot creation
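For reference, with two schedules on one image the expected behavior is snapshots at the union of both intervals, i.e. each refresh should pick the earliest upcoming deadline across all of the image's schedules. A rough illustration in plain Python (not the actual scheduler code; intervals are simplified to seconds):

import datetime

def next_deadline(intervals_secs, now):
    # Earliest upcoming run across *all* schedules for one image,
    # not just the first schedule found.
    return min(i - (now.timestamp() % i) for i in intervals_secs)

now = datetime.datetime.now(datetime.timezone.utc)
# An image with "every 5m" and "every 1m" should snapshot at most 60s from now.
print(next_deadline([300, 60], now))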
In practice, the scheduler queue only adds a single schedule per image.

rbd - Bug #53915 (Resolved): rbd snapshot schedule status output is missing schedule
https://tracker.ceph.com/issues/53915 (2022-01-18T14:56:50Z, Sunny Kumar)
The rbd snapshot schedule status output is missing the schedule after 60 seconds (i.e. after load()).

Steps to reproduce:

- Enable mirroring and ensure the RBD mirror daemons are running
- Add a new schedule for the image
- List the schedules
- Wait for 60 seconds
- List the schedules again
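A small helper to catch the disappearance automatically (a subprocess wrapper around the same rbd commands used in the session below; the cluster, pool, and image names are assumptions matching that session):

import subprocess
import time

def schedule_ls(extra=()):
    cmd = ['rbd', '--cluster', 'site-11', 'mirror', 'snapshot', 'schedule',
           'ls', '--pool', 'data', '--recursive', *extra]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

before = schedule_ls()
time.sleep(75)   # wait past the handler's periodic load()
after = schedule_ls()
if 'image1' in before and 'image1' not in after:
    print('schedule disappeared from the recursive listing')
    print(schedule_ls(('--image', 'image1')))  # the per-image query still shows it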
=============

sunny@o05 build -> rbd mirror snapshot schedule ls --pool data --cluster site-11 --recursive
POOL NAMESPACE IMAGE SCHEDULE

sunny@o05 build -> rbd mirror snapshot schedule ls --pool data --cluster site-11 --recursive --image image1
POOL NAMESPACE IMAGE SCHEDULE
data image1 every 5m

========

rbd - Bug #53914 (Resolved): rbd mirror snapshot schedule is not working properly after a few fai...
https://tracker.ceph.com/issues/53914 (2022-01-18T14:48:32Z, Sunny Kumar)
Steps to reproduce:
[NOTE: this is not a consistent reproducer]
- Enable mirroring and ensure the RBD mirror daemons are running
- Demote the image on the primary cluster
- Promote the image on the secondary cluster
- Do IO on the secondary
- Sleep for 75 seconds (the schedule is for 60 seconds)
- Verify snapshots are being created for the added schedule
- Demote on the secondary
- Promote on the primary
- Sleep for 75 seconds (the schedule is for 60 seconds)
- Verify snapshots are being created for the added schedule

A scripted version of these steps is sketched below.
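This is a hedged script of the sequence above; the cluster names site-a/site-b, pool data, and image image1 are assumptions about the setup, and rbd bench is used as a stand-in for "do IO":

import subprocess
import time

def rbd(cluster, *args):
    res = subprocess.run(['rbd', '--cluster', cluster, *args],
                         check=True, capture_output=True, text=True)
    return res.stdout

rbd('site-a', 'mirror', 'image', 'demote', 'data/image1')
# in practice a short wait may be needed here for the demotion to propagate
rbd('site-b', 'mirror', 'image', 'promote', 'data/image1')
rbd('site-b', 'bench', '--io-type', 'write', '--io-total', '16M', 'data/image1')
time.sleep(75)                                                # schedule is every 60s
print(rbd('site-b', 'snap', 'ls', '--all', 'data/image1'))    # expect a fresh mirror snapshot
rbd('site-b', 'mirror', 'image', 'demote', 'data/image1')
rbd('site-a', 'mirror', 'image', 'promote', 'data/image1')
time.sleep(75)
print(rbd('site-a', 'snap', 'ls', '--all', 'data/image1'))    # intermittently no new snapshot (the bug)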
Sometimes snapshots are not created by the scheduler.

rbd - Bug #53250 (Resolved): [rbd_support] passing invalid interval removes entire schedule
https://tracker.ceph.com/issues/53250 (2021-11-12T17:14:09Z, Sunny Kumar)
If we provide a random string as the interval in the snapshot schedule remove command, the entire schedule associated with the image gets removed.
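In the session below the CLI does report "Invalid interval (test)", yet the stored schedules are still wiped afterwards, which suggests the removal path mutates the schedule before (or regardless of) the interval failing validation. A minimal sketch of the safer ordering (illustrative only; the regex and names are assumptions, not the rbd_support module's actual parser):

import re

INTERVAL_RE = re.compile(r'^\d+[mhd]$')   # assumed format, e.g. 1m, 5m, 1h, 2d

def remove_interval(configured, interval):
    # Validate first; only touch the stored list once the input is known good.
    if not INTERVAL_RE.match(interval):
        raise ValueError('Invalid interval ({})'.format(interval))
    if interval in configured:
        configured.remove(interval)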
------------------------
sunny@ws build -> rbd mirror snapshot schedule ls --pool data --image image1 --cluster site-a --recursive
POOL NAMESPACE IMAGE SCHEDULE

sunny@ws build -> rbd mirror snapshot schedule add --pool data --image image1 1m --cluster site-a

sunny@ws build -> rbd mirror snapshot schedule ls --pool data --image image1 --cluster site-a --recursive
POOL NAMESPACE IMAGE SCHEDULE
data image1 every 1m

sunny@ws build -> rbd mirror snapshot schedule ls --pool data --image image1 --cluster site-a --recursive
POOL NAMESPACE IMAGE SCHEDULE
data image1 every 1m

sunny@ws build -> rbd mirror snapshot schedule add --pool data --image image1 1h --cluster site-a

sunny@ws build -> rbd mirror snapshot schedule add --pool data --image image1 5m --cluster site-a

sunny@ws build -> rbd mirror snapshot schedule ls --pool data --image image1 --cluster site-a --recursive
POOL NAMESPACE IMAGE SCHEDULE
data image1 every 1m, every 1h, every 5m

sunny@ws build -> rbd mirror snapshot schedule remove --pool data --image image1 test --cluster site-a
rbd: rbd mirror snapshot schedule remove failed: (22) Invalid argument: Invalid interval (test)

sunny@ws build -> rbd mirror snapshot schedule ls --pool data --image image1 --cluster site-a --recursive
POOL NAMESPACE IMAGE SCHEDULE

Ceph - Bug #47971 (Resolved): do_cmake: build fails on fedora-33 due to python version
https://tracker.ceph.com/issues/47971 (2020-10-23T15:06:11Z, Sunny Kumar)
Traceback:
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:165 (message):
  Could NOT find Python3: Found unsuitable version "3.9.0", but required is
  exact version "3.8" (found /usr/bin/python3, found components: Interpreter
  Development)

mgr - Bug #47645 (New): mgr: self-test fails on devicehealth module
https://tracker.ceph.com/issues/47645 (2020-09-24T21:40:42Z, Sunny Kumar)
Steps to reproduce:

1. MDS=0 MGR=1 OSD=2 MON=1 ../src/vstart.sh -n -b --without-dashboard --debug
2. ./bin/ceph mgr module enable selftest
3. ./bin/ceph mgr self-test module devicehealth
Output:

Error EPERM: Test failed: Remote method threw exception: Traceback (most recent call last):
  File "/home/sunny/lab/ceph/src/pybind/mgr/devicehealth/module.py", line 214, in self_test
    assert before != after
AssertionError
-------------
mgr.x.log
-------------
2020-09-24T14:25:21.939+0100 7f4950f5c700  0 [devicehealth ERROR root] Fail to parse JSON result from daemon mon.a ()
2020-09-24T14:25:21.939+0100 7f4950f5c700  1 -- 192.168.0.5:0/2012805582 --> [v2:192.168.0.5:6810/13776,v1:192.168.0.5:6811/13776] -- osd_op(unknown.0.0:2 1.0 1:9eeed0c4:::SAMSUNG_MZVLB256HAHQ-000L7_S41GNX3M542660:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected e16) v8 -- 0x558e62c99400 con 0x558e62ba3180
2020-09-24T14:25:21.940+0100 7f49714db700  1 -- 192.168.0.5:0/2012805582 <== osd.1 v2:192.168.0.5:6810/13776 2 ==== osd_op_reply(2 SAMSUNG_MZVLB256HAHQ-000L7_S41GNX3M542660 [omap-get-vals] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 185+0+0 (secure 0 0 0) 0x558e62d02240 con 0x558e62ba3180
2020-09-24T14:25:21.941+0100 7f4950f5c700 20 mgr ~Gil Destroying new thread state 0x558e62a6d9e0
2020-09-24T14:25:21.941+0100 7f4950f5c700 -1 Remote method threw exception: Traceback (most recent call last):
  File "/home/sunny/lab/ceph/src/pybind/mgr/devicehealth/module.py", line 214, in self_test
    assert before != after
AssertionError

2020-09-24T14:25:21.941+0100 7f4950f5c700 20 mgr ~Gil Destroying new thread state 0x558e62974d80
2020-09-24T14:25:21.941+0100 7f4950f5c700 -1 mgr.server reply reply (1) Operation not permitted Test failed: Remote method threw exception: Traceback (most recent call last):
  File "/home/sunny/lab/ceph/src/pybind/mgr/devicehealth/module.py", line 214, in self_test
    assert before != after
AssertionError

2020-09-24T14:25:21.941+0100 7f4950f5c700  1 -- [v2:192.168.0.5:6800/14559,v1:192.168.0.5:6801/14559] --> 192.168.0.5:0/314104114 -- mgr_command_reply(tid 0: -1 Test failed: Remote method threw exception: Traceback (most recent call last):
  File "/home/sunny/lab/ceph/src/pybind/mgr/devicehealth/module.py", line 214, in self_test
    assert before != after
AssertionError

Ceph - Bug #46695 (New): install-deps is missing python-yaml dependency on Fedora32
https://tracker.ceph.com/issues/46695 (2020-07-23T15:54:13Z, Sunny Kumar)
On a Fedora 32 machine, install-deps.sh is unable to install a few dependencies.
Observation:

- pyyaml is being collected and added to the wheel:
Collecting pyyaml
  Downloading PyYAML-5.3.1.tar.gz (269 kB)
     |████████████████████████████████| 269 kB 8.2 MB/s
Collecting six
  File was already downloaded /root/ceph/src/pybind/mgr/dashboard/wheelhouse-wip/six-1.15.0-py2.py3-none-any.whl
Collecting portend>=2.1.1

- Building wheel for pyyaml (setup.py) ... done
  Created wheel for pyyaml: filename=PyYAML-5.3.1-cp38-cp38-linux_x86_64.whl size=44617 sha256=d2b69df1309495c1baae2052b26116f636aba6224f834191c2d911ac31bc60ad
- But after compiling and running vstart, the following traceback can be seen in the mgr log:
ModuleNotFoundError: No module named 'yaml'

sepia - Support #45975 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/45975 (2020-06-11T12:48:41Z, Sunny Kumar)
1) Do you just need VPN access or will you also be running teuthology jobs?

Access to both will be great.

2) Desired Username:
sunny

3) Alternate e-mail address(es) we can reach you at:

sunkumar@redhat.com

4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?

Yes, Josh Durgin (jdurgin@redhat.com)
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
5) Paste your SSH public key(s) between the pre tags:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDs2+6XwRPdiDHT+63Yab00VBulg96/OLEORtNUvm7TDRMpcx4OD2oOnbsE4rG3dOdXNgT17H15c6MWFX2a2uC2h1j4hJTa+dG567HrYXRHG60Kwg5Sviiuq/ETPoH/r3OSvHs0HJ4Y9qYPQ5j0k7gG1XgIY3QHDWNRsXiyRXP9FEfoaOKABDTUoLPjqRX+4ExJd1Nz6DLxqIlre8wfqF5L1l12vec8JU3HB5NSRa87HxKFtLLIAtw7B9H/HggqX5wWVEnXq2qN4o6OOePhaO5cjWAZxOtT+AJTyHkeyaWLwIGOHBVZS5BKti6jLc+PxQxAflYlu9X3xh/2FlA+9H+ABZwMHE3ruUfTVbviYBbtccMCvkzcvFgnvaUCHjd7Iv/T8/JHNfBgFgigcX0Vei0+PZQ5oC5b7YpKT241NA8DCGZNgr1KuL9LazkvbkbpDcaHUOrkWsmSW1RRMC6ffYkbAI9QiBJFpImDAWP1u+1yCGhzPKBfGNt91GcLCD6ML90= sunny@devBox

6) Paste your hashed VPN credentials between the pre tags (Format: user@hostname 22CharacterSalt 65CharacterHashedPassword):

sunny@devBox 81r/AKNOWhvaD0ESYy6tTg 58b86902f858bc3f1edb12cfc41ce10ed163985db2f7c253de08b7f1c310fe11