Activity
From 06/05/2022 to 07/04/2022
07/04/2022
- 09:36 PM Backport #54242 (Resolved): octopus: mds: clients can send a "new" op (file operation) and crash ...
- 09:28 PM Backport #54241 (Resolved): pacific: mds: clients can send a "new" op (file operation) and crash ...
- 09:16 PM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
- 09:10 PM Backport #55540 (Resolved): quincy: cephfs-top: multiple file system support
- 09:03 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
- 09:02 PM Backport #55428 (Resolved): quincy: unaccessible dentries after fsstress run with namespace-restr...
- 09:00 PM Backport #55626 (Resolved): quincy: cephfs-shell: put command should accept both path mandatorily...
- 09:00 PM Backport #55628 (Resolved): quincy: cephfs-shell: creates directories in local file system even i...
- 08:59 PM Backport #55630 (Resolved): quincy: cephfs-shell: saving files doesn't work as expected
- 03:15 PM Backport #56462 (Resolved): pacific: mds: crash due to seemingly unrecoverable metadata error
- https://github.com/ceph/ceph/pull/47433
- 03:15 PM Backport #56461 (Resolved): quincy: mds: crash due to seemingly unrecoverable metadata error
- https://github.com/ceph/ceph/pull/47432
- 03:10 PM Bug #54384 (Pending Backport): mds: crash due to seemingly unrecoverable metadata error
- 12:42 PM Bug #52438 (Resolved): qa: ffsb timeout
- 12:39 PM Bug #54106 (Duplicate): kclient: hang during workunit cleanup
- This is a duplicate of https://tracker.ceph.com/issues/55857.
- 12:26 PM Bug #56282 (In Progress): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state(...
- 08:59 AM Backport #56056 (In Progress): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): as...
- 08:48 AM Backport #56055 (In Progress): quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): ass...
- 03:00 AM Backport #56449 (Resolved): pacific: pjd failure (caused by xattr's value not consistent between ...
- https://github.com/ceph/ceph/pull/47056
- 03:00 AM Backport #56448 (Resolved): quincy: pjd failure (caused by xattr's value not consistent between a...
- https://github.com/ceph/ceph/pull/47057
- 02:58 AM Bug #55331 (Pending Backport): pjd failure (caused by xattr's value not consistent between auth M...
- 02:44 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- Seen here: https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-defa...
07/03/2022
- 12:02 PM Support #56443 (New): OSD USED Size contains unknown data
- Hi,
We have a problem: the POOL recognizes information of ~1 GB in size, and it is associated with a type of SS...
07/02/2022
- 07:13 PM Feature #56442 (New): mds: build asok command to dump stray files and associated caps
- To diagnose what is delaying reintegration or deletion.
- 01:07 PM Bug #55762 (Fix Under Review): mgr/volumes: Handle internal metadata directories under '/volumes'...
- 01:06 PM Backport #56014 (Resolved): pacific: quota support for subvolumegroup
- 01:04 PM Feature #55401 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) for subvolu...
- 01:04 PM Backport #55802 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) f...
07/01/2022
- 05:42 PM Backport #51323: octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45159
merged
- 01:37 PM Backport #52634: octopus: mds sends cap updates with btime zeroed out
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45164
merged
- 01:36 PM Backport #50914: octopus: MDS heartbeat timed out between during executing MDCache::start_files_t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45157
merged
- 01:26 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
- /a/yuriw-2022-06-30_22:36:16-upgrade:pacific-x-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907793/
- 07:02 AM Bug #54111 (Resolved): data pool attached to a file system can be attached to another file system
- 04:08 AM Feature #55121 (Closed): cephfs-top: new options to limit and order-by
- Based on my discussion with Greg, I'm closing this ticket, because the issue that the customer reported in BZ[1] is p...
- 03:17 AM Bug #56435: octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), after wait...
- The clients have been unregistered at *_2022-06-24T20:00:11_*:...
- 03:13 AM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
- /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
The unre...
- 03:01 AM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- Laura Flores wrote:
> /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smi...
06/30/2022
- 08:18 PM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
- 06:55 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
- /a/yuriw-2022-06-30_14:20:05-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907396/
- 04:57 PM Bug #56384 (Resolved): ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- 10:04 AM Feature #56428 (New): add command "fs deauthorize"
- Since entity auth keyrings can now hold auth caps for multiple Ceph FSs, it is very tedious and very error-prone to r...
06/29/2022
- 08:38 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Venky Shankar wrote:
> Dhairya, please do the backport.
https://github.com/ceph/ceph/pull/46901
- 02:15 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya, please do the backport.
- 07:44 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://github.com/ceph/ceph/pull/46899
- 07:41 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Venky Shankar wrote:
> Dhairya, please do the backport.
Okay, sure.
- 02:15 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya, please do the backport.
- 04:49 PM Bug #52123: mds sends cap updates with btime zeroed out
- Not sure what has to happen to unwedge this backport.
- 02:48 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Posting an update here based on discussion between me, Greg and Patrick:
Short term plan: Helper script to identif...
- 11:08 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
- Method conduct_neg_test_for_write_caps() in qa/tasks/cephfs/caps_helper.py appends path to command arguments but does...
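A rough sketch of the pattern the fix implies (the helper name and arguments below are illustrative, not the actual caps_helper.py code): the path appended for one run is removed again once the command has executed, so the shared argument list can be reused safely.
```python
# Illustrative sketch only, not the real qa/tasks/cephfs/caps_helper.py code.
def run_neg_write_test(run_cmd, base_args, paths):
    for path in paths:
        cmd_args = base_args       # shared list, reused across iterations
        cmd_args.append(path)      # path is only needed for this run
        try:
            run_cmd(cmd_args)
        finally:
            cmd_args.remove(path)  # delete path from cmd args after use
```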
- 09:22 AM Bug #56414 (Fix Under Review): mounting subvolume shows size/used bytes for entire fs, not subvolume
- 09:18 AM Bug #56414 (In Progress): mounting subvolume shows size/used bytes for entire fs, not subvolume
- Hit the same issue in libcephfs.
- 09:18 AM Bug #56414 (Resolved): mounting subvolume shows size/used bytes for entire fs, not subvolume
- When mounting a subvolume at the base dir of the subvolume, the kernel client correctly shows the size/usage of a sub...
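A quick client-side check of what the report describes (mount point is hypothetical): with a quota set on the subvolume, statvfs on the subvolume mount should reflect the quota, not the size of the whole file system.
```python
import os

MOUNTPOINT = "/mnt/subvol"  # hypothetical subvolume mount point
st = os.statvfs(MOUNTPOINT)
total = st.f_blocks * st.f_frsize              # should match the subvolume quota
used = (st.f_blocks - st.f_bfree) * st.f_frsize
print(f"total={total} bytes, used={used} bytes")
```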
- 01:02 AM Backport #56110 (Resolved): pacific: client: choose auth MDS for getxattr with the Xs caps
- 01:01 AM Backport #56105 (Resolved): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown err...
- 01:00 AM Backport #56016 (Resolved): pacific: crash just after MDS become active
- 01:00 AM Bug #54411 (Resolved): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 f...
- 01:00 AM Backport #55449 (Resolved): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm ...
- 12:59 AM Backport #55993 (Resolved): pacific: client: switch to glibc's STATX macros
- 12:58 AM Backport #55935 (Resolved): pacific: client: infinite loop "got ESTALE" after mds recovery
- 12:58 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
- 12:57 AM Backport #55660 (Resolved): pacific: qa: add test case for fsync crash issue
- 12:56 AM Backport #55757 (Resolved): pacific: mds: flush mdlog if locked and still has wanted caps not sat...
06/28/2022
- 04:46 PM Bug #17594 (In Progress): cephfs: permission checking not working (MDS should enforce POSIX permi...
- 04:19 PM Bug #53045 (Resolved): stat->fsid is not unique among filesystems exported by the ceph server
- 04:04 PM Bug #53765 (Resolved): mount helper mangles the new syntax device string by qualifying the name
- 04:04 PM Fix #52068: qa: add testing for "ms_mode" mount option
- This appears to be waiting for a pacific backport.
- 04:00 PM Fix #52068: qa: add testing for "ms_mode" mount option
- I think this is in now, right?
- 04:02 PM Bug #50719 (Can't reproduce): xattr returning from the dead (sic!)
- No response in several months. Closing case. Ralph, feel free to reopen if you have more info to share.
- 03:58 PM Bug #52134 (Can't reproduce): botched cephadm upgrade due to mds failures
- Haven't seen this in some time.
- 03:53 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
- I don't see us fixing this in order to get local atime semantics. Closing WONTFIX.
- 03:52 PM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- Handing this back to Patrick for now. I haven't seen this occur myself. Is this still a problem? Should we close it out?
- 03:17 PM Backport #56105: pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46802
merged
- 03:15 PM Backport #56110: pacific: client: choose auth MDS for getxattr with the Xs caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46799
merged
- 03:15 PM Backport #55449: pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46798
merged
- 03:14 PM Backport #56016: pacific: crash just after MDS become active
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46682
merged
- 03:14 PM Backport #55993: pacific: client: switch to glibc's STATX macros
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46679
merged
- 03:12 PM Backport #54577: pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pa...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46646
merged
- 03:11 PM Backport #55935: pacific: client: infinite loop "got ESTALE" after mds recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46557
merged
- 03:10 PM Backport #55660: pacific: qa: add test case for fsync crash issue
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46425
merged
- 03:10 PM Backport #55659: pacific: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46424
merged
- 03:08 PM Backport #55757: pacific: mds: flush mdlog if locked and still has wanted caps not satisfied
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46423
merged
- 01:45 PM Feature #55821 (Fix Under Review): pybind/mgr/volumes: interface to check the presence of subvolu...
- 01:31 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- Hi there, sorry for delays, this was very tricky to get info on as it did not reproduce outside of our CI. So it requ...
- 12:28 PM Bug #53214 (Resolved): qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42...
- 12:00 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Venky Shankar wrote:
> Xiubo, please take a look.
Sure.
- 10:50 AM Bug #56282 (Triaged): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ==...
- 10:50 AM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Xiubo, please take a look.
- 11:30 AM Bug #56384 (Fix Under Review): ceph/test.sh: check_response erasure-code didn't find erasure-code...
- 09:55 AM Bug #56380: crash: Client::_get_vino(Inode*)
- Venky Shankar wrote:
> Xiubo Li wrote:
> > This should be fixed by https://github.com/ceph/ceph/pull/45614, in http...
- 09:46 AM Bug #56380 (Duplicate): crash: Client::_get_vino(Inode*)
- Dup: https://tracker.ceph.com/issues/54653
- 09:45 AM Bug #56380: crash: Client::_get_vino(Inode*)
- Xiubo Li wrote:
> This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/p...
- 06:53 AM Bug #56380: crash: Client::_get_vino(Inode*)
- This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif...
- 09:54 AM Backport #56113 (Rejected): pacific: data pool attached to a file system can be attached to anoth...
- 09:53 AM Backport #56114 (Rejected): quincy: data pool attached to a file system can be attached to anothe...
- 09:48 AM Bug #56263 (Duplicate): crash: Client::_get_vino(Inode*)
- Dup: https://tracker.ceph.com/issues/54653
- 06:53 AM Bug #56263: crash: Client::_get_vino(Inode*)
- This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif...
- 07:02 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- Should be fixed by https://tracker.ceph.com/issues/54049.
- 06:41 AM Bug #56397 (Fix Under Review): client: `df` will show incorrect disk size if the quota size is no...
- 02:27 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
06/27/2022
- 06:23 PM Bug #54108 (Fix Under Review): qa: iogen workunit: "The following counters failed to be set on md...
06/24/2022
- 03:17 PM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- Regression introduced with https://github.com/ceph/ceph/pull/44900.
- 04:06 AM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894631
- 03:59 AM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894626
/a/yuriw-2022-06-...
- 03:38 AM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894622
- 03:36 AM Bug #56384 (Resolved): ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- ...
- 10:12 AM Bug #55976 (Fix Under Review): mgr/volumes: Clone operations are failing with Assertion Error
- 09:16 AM Backport #56152: pacific: mgr/snap_schedule: schedule updates are not persisted across mgr restart
- Venky Shankar wrote:
> Milind, This is pacific only due to the usage of libsqlite in mainline vs in-memory+rados dum...
- 09:11 AM Backport #56152: pacific: mgr/snap_schedule: schedule updates are not persisted across mgr restart
- Milind, is this pacific-only due to the usage of libsqlite in mainline vs in-memory+rados dump in pacific?
- 09:01 AM Bug #56012 (Fix Under Review): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_repla...
- 03:14 AM Bug #56380 (Duplicate): crash: Client::_get_vino(Inode*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cf152a5bd5d340d5ee9fabea...
- 03:10 AM Bug #56288 (Triaged): crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, cep...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=aeaa2b6c5a82bba2b2f33885...
- 03:10 AM Bug #56282 (Duplicate): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e99e620a470c067176ebf0e...
- 03:09 AM Bug #56270 (Duplicate): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Sna...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c00cdd2659181963c4fcc1ea...
- 03:09 AM Bug #56269 (Resolved): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Snap...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d2aeadfccf541e27a05866ac...
- 03:09 AM Bug #56263 (Duplicate): crash: Client::_get_vino(Inode*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0b704d8eeaf9a29a1e49c16c...
- 03:09 AM Bug #56261 (Triaged): crash: Migrator::import_notify_abort(CDir*, std::set<CDir*, std::less<CDir*...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ef0defbe852e18fecfcbe993...
- 03:08 AM Bug #56249 (Resolved): crash: int Client::_do_remount(bool): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e023ce46f46b39b4a3c88a31...
06/23/2022
- 12:21 PM Feature #55715 (In Progress): pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
06/22/2022
- 06:40 PM Bug #23724 (Fix Under Review): qa: broad snapshot functionality testing across clients
- Does not include Ganesha.
- 06:39 PM Feature #55470 (Fix Under Review): qa: postgresql test suite workunit
- 12:16 PM Feature #55470 (In Progress): qa: postgresql test suite workunit
- 06:37 PM Bug #56169 (Fix Under Review): mgr/stats: 'perf stats' command shows incorrect output with non-ex...
- 01:49 PM Bug #56169 (Resolved): mgr/stats: 'perf stats' command shows incorrect output with non-existing m...
- When `ceph fs perf stats` command runs with non-existing mds_rank filter, it shows all the clients in `client_metadat...
- 02:54 PM Backport #56014: pacific: quota support for subvolumegroup
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46668
merged
- 02:52 PM Backport #55802: pacific: mgr/volumes: allow users to add metadata (key-value pairs) for subvolum...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46515
merged
- 11:27 AM Bug #56162 (Resolved): mgr/stats: add fs_name as field in perf stats command output
- fs_name needs to be added as a field with the change in structure of perf stats output.
- 10:11 AM Backport #56104 (In Progress): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' cr...
- 10:10 AM Backport #56103 (In Progress): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' cra...
- 10:03 AM Backport #56108 (In Progress): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'sna...
- 10:01 AM Backport #56107 (In Progress): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'sn...
- 06:44 AM Backport #56106 (In Progress): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown e...
- 06:43 AM Backport #56105 (In Progress): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown ...
- 05:33 AM Backport #56109 (In Progress): quincy: client: choose auth MDS for getxattr with the Xs caps
- 05:30 AM Backport #56110 (In Progress): pacific: client: choose auth MDS for getxattr with the Xs caps
- 05:22 AM Backport #55449 (In Progress): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cepha...
- 05:03 AM Backport #55661 (Resolved): quincy: qa: add test case for fsync crash issue
- 05:02 AM Backport #55658 (Resolved): quincy: mds: stuck 2 seconds and keeps retrying to find ino from auth...
- 01:46 AM Backport #56152 (Resolved): pacific: mgr/snap_schedule: schedule updates are not persisted across...
- https://github.com/ceph/ceph/pull/46797
scrub status does not reflect the correct status after mgr restart
eg...
06/21/2022
- 12:51 PM Bug #55516 (Resolved): qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data:...
- 12:50 PM Backport #55621 (Resolved): quincy: qa: fs suite tests failing with "json.decoder.JSONDecodeError...
- 10:18 AM Backport #55621 (In Progress): quincy: qa: fs suite tests failing with "json.decoder.JSONDecodeEr...
- 12:47 PM Backport #55916 (Resolved): pacific: qa: fs suite tests failing with "json.decoder.JSONDecodeErro...
- 10:16 AM Backport #55916 (In Progress): pacific: qa: fs suite tests failing with "json.decoder.JSONDecodeE...
- 12:44 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Tracker https://tracker.ceph.com/issues/54546 related to metadat...
- 12:35 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Venky Shankar wrote:
> Tracker https://tracker.ceph.com/issues/54546 related to metadata corruption which is seen wh...
- 10:46 AM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Tracker https://tracker.ceph.com/issues/54546 related to metadata corruption which is seen when running databases (es...
- 10:39 AM Feature #56140 (Pending Backport): cephfs: tooling to identify inode (metadata) corruption
- 10:42 AM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
- 10:42 AM Backport #55342 (Resolved): quincy: mds: try to reset heartbeat when fetching or committing.
- 10:42 AM Backport #55343 (Resolved): pacific: mds: try to reset heartbeat when fetching or committing.
- 10:40 AM Bug #55129 (Resolved): client: get stuck forever when the forward seq exceeds 256
- 10:40 AM Backport #55346 (Resolved): pacific: client: get stuck forever when the forward seq exceeds 256
- 10:39 AM Backport #55345 (Resolved): quincy: client: get stuck forever when the forward seq exceeds 256
- 10:37 AM Bug #16739 (Resolved): Client::setxattr always sends setxattr request to MDS
- 10:36 AM Backport #55192 (Resolved): pacific: Client::setxattr always sends setxattr request to MDS
- 10:36 AM Backport #55447 (Resolved): quincy: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm d...
- 10:11 AM Bug #54971 (Resolved): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics...
- 10:11 AM Backport #55338 (Resolved): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.te...
- 10:10 AM Bug #50033 (Resolved): mgr/stats: be resilient to offline MDS rank-0
- 10:08 AM Backport #54479 (Resolved): pacific: mgr/stats: be resilient to offline MDS rank-0
- 06:27 AM Bug #56116: mds: handle deferred client request core when mds reboot
- Venky Shankar wrote:
> Hi,
>
> Do you have a specific reproducer for this (in the form of a workload)?
>
> Che...
- 05:34 AM Bug #56116: mds: handle deferred client request core when mds reboot
- Hi,
Do you have a specific reproducer for this (in the form of a workload)?
Cheers,
Venky - 04:13 AM Bug #56116 (Fix Under Review): mds: handle deferred client request core when mds reboot
06/20/2022
- 10:46 PM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- The counters mds.imported and mds.exported were not incremented during iogen workloads. I'm not sure why the exports ...
- 10:04 PM Bug #55807 (Duplicate): qa failure: workload iogen failed
- Duplicate of https://tracker.ceph.com/issues/54108
- 05:34 PM Bug #50546 (Won't Fix): nautilus: qa: 'The following counters failed to be set on mds daemons: {'...
- nautilus is EOL
- 12:37 PM Bug #56063 (Triaged): Snapshot retention config lost after mgr restart
- Milind, please take a look.
- 09:27 AM Backport #56114 (In Progress): quincy: data pool attached to a file system can be attached to ano...
- 04:36 AM Backport #56114 (Rejected): quincy: data pool attached to a file system can be attached to anothe...
- https://github.com/ceph/ceph/pull/46752
- 09:26 AM Backport #56113 (In Progress): pacific: data pool attached to a file system can be attached to an...
- 04:36 AM Backport #56113 (Rejected): pacific: data pool attached to a file system can be attached to anoth...
- https://github.com/ceph/ceph/pull/46751
- 09:25 AM Feature #56058 (Fix Under Review): mds/MDBalancer: add an arg to limit depth when dump loads for ...
- 09:14 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo, there is another class of failures for this test. See - https://p...
- 04:06 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, there is another class of failures for this test. See - https://pulpito.ceph.com/vshan...
- 03:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, there is another class of failures for this test. See - https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-...
- 08:50 AM Backport #55797 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) fo...
- 08:49 AM Feature #54472 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- 08:48 AM Bug #54375 (Resolved): mgr/volumes: The 'mode' argument is not honored on idempotent subvolume cr...
- 08:48 AM Backport #54574 (Resolved): quincy: mgr/volumes: The 'mode' argument is not honored on idempotent...
- 08:47 AM Backport #54573 (Resolved): pacific: mgr/volumes: The 'mode' argument is not honored on idempoten...
- 08:47 AM Bug #54049 (Resolved): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expect...
- 08:46 AM Backport #54478 (Resolved): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path...
- 08:45 AM Backport #54477 (Resolved): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path ...
- 08:43 AM Backport #55039 (Resolved): quincy: ceph-fuse: mount -a on already mounted folder should be ignored
- 08:42 AM Backport #55413 (Resolved): quincy: mds: add perf counter to record slow replies
- 08:42 AM Backport #55412 (Resolved): pacific: mds: add perf counter to record slow replies
- 08:40 AM Backport #55376 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) to...
- 08:27 AM Bug #56116 (Pending Backport): mds: handle deferred client request core when mds reboot
When the mds reboots, the client will send `mds_requests` and `client_reconnect` to the mds.
If the mds does not receive the `cli...
- 04:33 AM Bug #54111 (Pending Backport): data pool attached to a file system can be attached to another fil...
- 04:30 AM Backport #56112 (Resolved): pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestRe...
- 04:30 AM Backport #56111 (Resolved): quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestRea...
- 04:30 AM Backport #56110 (Resolved): pacific: client: choose auth MDS for getxattr with the Xs caps
- https://github.com/ceph/ceph/pull/46799
- 04:30 AM Backport #56109 (Resolved): quincy: client: choose auth MDS for getxattr with the Xs caps
- https://github.com/ceph/ceph/pull/46800
- 04:30 AM Cleanup #3998 (Resolved): mds: split up mdstypes
- 04:27 AM Bug #55538 (Pending Backport): Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- 04:25 AM Backport #56108 (Resolved): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapsh...
- https://github.com/ceph/ceph/pull/46804
- 04:25 AM Backport #56107 (Resolved): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snaps...
- https://github.com/ceph/ceph/pull/46803
- 04:25 AM Backport #56106 (Resolved): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown erro...
- https://github.com/ceph/ceph/pull/46801
- 04:25 AM Backport #56105 (Resolved): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown err...
- https://github.com/ceph/ceph/pull/46802
- 04:25 AM Bug #55778 (Pending Backport): client: choose auth MDS for getxattr with the Xs caps
- 04:22 AM Bug #55824 (Pending Backport): ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- 04:21 AM Bug #55822 (Pending Backport): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot in...
- 04:21 AM Bug #55822 (Resolved): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
- 04:20 AM Backport #56104 (Resolved): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- https://github.com/ceph/ceph/pull/46806
- 04:20 AM Backport #56103 (Resolved): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- https://github.com/ceph/ceph/pull/46805
- 04:18 AM Bug #55759 (Pending Backport): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 04:17 AM Bug #56065 (Resolved): qa: TestMDSMetrics.test_delayed_metrics failure
- 02:14 AM Bug #56011 (Fix Under Review): fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
06/17/2022
- 08:31 PM Backport #55927: pacific: Unexpected file access behavior using ceph-fuse
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46596
merged
- 08:31 PM Backport #55932: pacific: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*,...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46567
merged
- 08:30 PM Backport #55338: pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45293
merged
- 08:30 PM Backport #54479: pacific: mgr/stats: be resilient to offline MDS rank-0
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45293
merged
06/16/2022
- 09:22 PM Backport #55349: pacific: mgr/volumes: Show clone failure reason in clone status command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45928
merged
- 02:31 PM Bug #56067: Cephfs data loss with root_squash enabled
- Odd. From the log:
> 2022-06-15T15:52:50.284+0000 7f26c9d35700 10 MDSAuthCap is_capable inode(path /npx/stress-tes...
- 02:02 PM Bug #56067: Cephfs data loss with root_squash enabled
- Running sync after creating the file did not report errors.
- 01:46 PM Bug #56067: Cephfs data loss with root_squash enabled
- Try running `sync` after creating the file. It should report errors.
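A minimal sketch of that suggestion (path is hypothetical): write the file as the squashed user and flush explicitly, so a write the client cannot persist surfaces as an error instead of disappearing silently.
```python
import os

path = "/mnt/cephfs/testfile"  # hypothetical mount point/path
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.write(fd, b"hello\n")
    os.fsync(fd)  # force the flush; an error here would explain the empty file
finally:
    os.close(fd)
```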
- 11:20 AM Backport #55375 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) t...
- merged.
- 10:26 AM Backport #54480 (Resolved): quincy: mgr/stats: be resilient to offline MDS rank-0
- 10:23 AM Backport #55337 (Resolved): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.tes...
- 10:18 AM Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- Xiubo Li wrote:
> From https://pulpito.ceph.com/vshankar-2022-06-10_05:38:08-fs-wip-vshankar-testing1-20220607-10413...
- 06:10 AM Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
From https://pulpito.ceph.com/vshankar-2022-06-10_05:38:08-fs-wip-vshankar-testing1-20220607-104134-testing-default...
- 08:58 AM Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
- /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881176
/a/yuriw-...
- 06:33 AM Bug #55759 (Fix Under Review): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
06/15/2022
- 04:07 PM Bug #56067 (New): Cephfs data loss with root_squash enabled
- With root_squash client capability enabled, a file is created as a non-root user on one host, appears empty when read...
- 02:28 PM Backport #55335: pacific: Issue removing subvolume with retained snapshots - Possible quincy regr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46139
merged
- 02:28 PM Backport #55412: pacific: mds: add perf counter to record slow replies
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46138
merged
- 02:27 PM Backport #55384: pacific: mgr/snap_schedule: include timezone information in scheduled snapshots
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45968
merged
- 02:25 PM Backport #55192: pacific: Client::setxattr always sends setxattr request to MDS
- Xiubo Li wrote:
> https://github.com/ceph/ceph/pull/45792
merged
- 01:58 PM Bug #56065 (Fix Under Review): qa: TestMDSMetrics.test_delayed_metrics failure
- 01:42 PM Bug #56065 (Resolved): qa: TestMDSMetrics.test_delayed_metrics failure
- TestMDSMetrics.test_delayed_metrics fails with the following message:...
- 11:47 AM Bug #56063 (Closed): Snapshot retention config lost after mgr restart
- In https://tracker.ceph.com/issues/54052 the issue that scheduled snapshots are no longer created after a mgr restar...
- 08:55 AM Feature #56058: mds/MDBalancer: add an arg to limit depth when dump loads for dirfrags
- https://github.com/ceph/ceph/pull/46685
- 08:54 AM Feature #56058 (Pending Backport): mds/MDBalancer: add an arg to limit depth when dump loads for ...
- The directory hierarchy may be deep for a large filesystem, and the dump loads command would output a lot and take a long time. So a...
- https://github.com/ceph/ceph/pull/46949
- 07:05 AM Backport #56055 (Resolved): quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert...
- https://github.com/ceph/ceph/pull/46948
- 07:03 AM Bug #54653 (Pending Backport): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag...
- 02:05 AM Bug #56011 (In Progress): fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- 02:01 AM Backport #56015 (In Progress): quincy: crash just after MDS become active
- 02:00 AM Backport #56016 (In Progress): pacific: crash just after MDS become active
- 01:53 AM Backport #55994 (In Progress): quincy: client: switch to glibc's STATX macros
- 01:53 AM Backport #55993 (In Progress): pacific: client: switch to glibc's STATX macros
06/14/2022
- 04:00 PM Feature #56051 (New): cephfs-mirror should handle the case that files are deleted on the mirror d...
- If files are deleted on the mirror destination, they are not re-created and there is no notification that there are n...
- 03:56 PM Bug #56050 (New): cephfs-mirror strips ceph.dir.layout.pool_namespace xattr
- When mirroring with cephfs-mirror, the ceph.dir.layout.pool_namespace xattr is stripped.
We use this attribute to ... - 03:51 PM Feature #56049 (New): Allow one FS to be cephfs-mirror source and destination at the same time
- For us it would be convenient if a FS could be mirror source and destination at the same time. Then we wouldn't need ...
- 03:41 PM Bug #56048 (New): ceph.mirror.info is not removed from target FS when mirroring is disabled
- When disabling mirroring on a FS with "ceph fs snapshot mirror disable <source-fs>" the "ceph.mirror.info" xattr is n...
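Until that is fixed, the leftover attribute can be checked and cleared by hand; a sketch assuming the target FS root is mounted at a hypothetical /mnt/target:
```python
import os

root = "/mnt/target"  # hypothetical mount point of the target FS root
try:
    print("leftover:", os.getxattr(root, "ceph.mirror.info"))
    os.removexattr(root, "ceph.mirror.info")  # manual cleanup
except OSError as err:
    print("ceph.mirror.info not present:", err)
```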
- 10:28 AM Backport #56014 (In Progress): pacific: quota support for subvolumegroup
- 09:19 AM Backport #56013 (In Progress): quincy: quota support for subvolumegroup
06/13/2022
- 06:56 PM Backport #54578 (In Progress): quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied ...
- 06:55 PM Backport #54577 (In Progress): pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied...
- 03:10 PM Bug #55842: Upgrading to 16.2.9 with 9M strays files causes MDS OOM
- Odd that this happened only once active. Not sure why.
- 02:44 PM Bug #56012: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_replay())
- /ceph/teuthology-archive/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default...
- 12:46 PM Bug #56012 (Triaged): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_replay())
- 06:15 AM Bug #56012: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_replay())
- Another instance of the crash, but this time with plain vanilla subvolume - https://pulpito.ceph.com/vshankar-2022-06...
- 06:13 AM Bug #56012 (Resolved): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_replay())
- Seen with fs:workload - https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550...
- 02:10 PM Feature #55715 (New): pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
- 01:06 PM Backport #56004 (In Progress): quincy: LibRadosMiscConnectFailure.ConnectFailure test failure
- 01:05 PM Backport #56005 (In Progress): pacific: LibRadosMiscConnectFailure.ConnectFailure test failure
- 12:53 PM Bug #55825: cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRAD...
- Similar to - https://tracker.ceph.com/issues/52624 and https://tracker.ceph.com/issues/51282
- 12:49 PM Bug #56003 (Triaged): client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
- 12:48 PM Bug #56011 (Triaged): fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- 04:05 AM Bug #56011 (Fix Under Review): fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/...
- 10:26 AM Bug #44100 (Resolved): cephfs rsync kworker high load.
- 10:26 AM Bug #44100: cephfs rsync kworker high load.
- Stefan Kooman wrote:
> The Patchwork link from @xiubo Li doesn't work for me. Has this been merged in upstream kerne...
- 10:14 AM Bug #44100: cephfs rsync kworker high load.
- The Patchwork link from @xiubo Li doesn't work for me. Has this been merged in upstream kernel already?
- 10:16 AM Bug #56010 (Fix Under Review): xfstests-dev generic/444 test failed
- 06:34 AM Bug #56010: xfstests-dev generic/444 test failed
- It was introduced by the recent fix from [1].
When setting the *_system.posix_acl_default_* it will always do sync r... - 05:53 AM Bug #56010 (In Progress): xfstests-dev generic/444 test failed
- 05:52 AM Bug #56010: xfstests-dev generic/444 test failed
- With just a 10-second sleep, the test passed:...
- 02:37 AM Bug #56010 (Resolved): xfstests-dev generic/444 test failed
- ...
- 08:16 AM Backport #56016 (Resolved): pacific: crash just after MDS become active
- https://github.com/ceph/ceph/pull/46682
- 08:16 AM Backport #56015 (Resolved): quincy: crash just after MDS become active
- https://github.com/ceph/ceph/pull/46681
- 08:10 AM Bug #53741 (Pending Backport): crash just after MDS become active
- 07:01 AM Backport #56014 (Resolved): pacific: quota support for subvolumegroup
- https://github.com/ceph/ceph/pull/46668
- 07:01 AM Backport #56013 (Resolved): quincy: quota support for subvolumegroup
- https://github.com/ceph/ceph/pull/46667
- 07:00 AM Fix #54317 (Resolved): qa: add testing in fs:workload for different kinds of subvolumes
- 06:57 AM Bug #53509 (Pending Backport): quota support for subvolumegroup
- 04:07 AM Bug #55822 (Fix Under Review): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot in...
- 03:53 AM Bug #55779: fuse client losing connection to mds
- Might be related to https://tracker.ceph.com/issues/56003
- 02:58 AM Bug #54546: mds: crash due to corrupt inode and omap entry
- Patrick, assigning this to you since you are making progress on this.
06/10/2022
- 06:20 PM Backport #55937 (In Progress): pacific: client: Inode::hold_caps_until should be a time from a mo...
- 05:25 PM Backport #56005 (Resolved): pacific: LibRadosMiscConnectFailure.ConnectFailure test failure
- https://github.com/ceph/ceph/pull/46626
- 05:25 PM Backport #56004 (Resolved): quincy: LibRadosMiscConnectFailure.ConnectFailure test failure
- https://github.com/ceph/ceph/pull/46563
- 05:22 PM Bug #55971 (Pending Backport): LibRadosMiscConnectFailure.ConnectFailure test failure
- 12:14 AM Bug #55971 (Fix Under Review): LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:21 PM Bug #56003: client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
- https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/...
- 05:15 PM Bug #56003 (Duplicate): client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
- https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/...
- 02:19 PM Backport #55629: pacific: cephfs-shell: saving files doesn't work as expected
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/46297
merged
- 02:18 PM Backport #55427: pacific: unaccessible dentries after fsstress run with namespace-restricted caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46183
merged
- 02:18 PM Backport #55343: pacific: mds: try to reset heartbeat when fetching or committing.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46180
merged
- 02:17 PM Backport #55346: pacific: client: get stuck forever when the forward seq exceeds 256
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46179
merged
- 02:15 PM Backport #55539: pacific: cephfs-top: multiple file system support
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46146
merged
- 03:45 AM Backport #55994 (Resolved): quincy: client: switch to glibc's STATX macros
- https://github.com/ceph/ceph/pull/46680
- 03:45 AM Backport #55993 (Resolved): pacific: client: switch to glibc's STATX macros
- https://github.com/ceph/ceph/pull/46679
- 03:42 AM Bug #55253 (Pending Backport): client: switch to glibc's STATX macros
06/09/2022
- 07:08 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- I have opened a possible fix, but I don't think we can achieve the same equality with the `client_mount_timeout` valu...
- 06:12 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- This can be reproduced locally on the most up-to-date version of main with:...
- 03:46 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- What I've found is that the seconds unit expects only integers. So with the change from `float` -> `secs`, it is no l...
- 03:04 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- @Venky please reassign as needed
- 02:26 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- The unit for `client_mount_timeout` was changed from float to seconds in that commit, so the LibRadosMiscConnectFailu...
- 04:08 PM Bug #55980 (Fix Under Review): mds,qa: some balancer debug messages (<=5) not printed when debug_...
- 04:01 PM Bug #55980 (Pending Backport): mds,qa: some balancer debug messages (<=5) not printed when debug_...
- If debug_mds_balancer is the default, 1/5, then debug messages 1<lvl<=5 will not be printed to the log even when debu...
- 11:55 AM Bug #55976 (Pending Backport): mgr/volumes: Clone operations are failing with Assertion Error
- Clone operations are failing with Assertion Error.
When we create more clones; in my case I have created 130 [root@ceph...
- The actual failure for this workunit is:...
- 11:34 AM Bug #55804: qa failure: pjd link tests failed
- This failure is pretty much related to cephfs subvolumes. Recent test runs:...
- 11:11 AM Backport #55927 (In Progress): pacific: Unexpected file access behavior using ceph-fuse
- 11:09 AM Backport #55926 (In Progress): quincy: Unexpected file access behavior using ceph-fuse
- 09:59 AM Backport #55449: pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
- Xiubo, please post a backport for this.
06/08/2022
- 09:57 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- The Tracker reports a problem with `client_mount_timeout`. Here are all of the changes that were made to the Client c...
- 08:59 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- Ran some tests on recent main builds and pinpointed a good and bad commit. These tests go from newest main build to o...
- 05:35 PM Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
- Laura, can you please triage this bug?
- 05:06 PM Bug #55971 (Resolved): LibRadosMiscConnectFailure.ConnectFailure test failure
- All rados_api_tests in the run failed due to the same reason. This points to a regression.
One of the recent runs (h... - 02:08 PM Backport #55797: quincy: mgr/volumes: allow users to add metadata (key-value pairs) for subvolume...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46508
merged
- 12:53 PM Backport #55932 (In Progress): pacific: crash: void Server::set_trace_dist(ceph::ref_t<MClientRep...
- 12:51 PM Backport #55933 (In Progress): quincy: crash: void Server::set_trace_dist(ceph::ref_t<MClientRepl...
- 12:27 PM Bug #55897: test_nfs: update of export's access type should not trigger NFS service restart
- Ramana, PTAL.
- 11:53 AM Feature #55715 (In Progress): pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
- 11:51 AM Backport #55239 (Resolved): quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds...
- 10:59 AM Bug #55807 (Need More Info): qa failure: workload iogen failed
- 10:59 AM Bug #55807: qa failure: workload iogen failed
- Ramana, please check if this is same as https://tracker.ceph.com/issues/54108
- 07:38 AM Bug #55842: Upgrading to 16.2.9 with 9M strays files causes MDS OOM
- Patrick Donnelly wrote:
> Do you know what state the MDS was in (up:replay?) when its memory ballooned to 70G?
Ap...
- 07:28 AM Backport #55936 (In Progress): quincy: client: Inode::hold_caps_until should be a time from a mon...
- 07:24 AM Backport #55936 (New): quincy: client: Inode::hold_caps_until should be a time from a monotonic c...
- 07:10 AM Backport #55936 (In Progress): quincy: client: Inode::hold_caps_until should be a time from a mon...
- 07:18 AM Backport #53761 (Resolved): pacific: mds: mds_oft_prefetch_dirfrags = false is not qa tested
- 07:08 AM Bug #55861 (Fix Under Review): Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_...
- 07:07 AM Bug #55861: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metrics.TestMDS...
- This failure was related to PR #46068. That's why it is handled there.
- 05:43 AM Bug #55824 (Fix Under Review): ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- 05:09 AM Bug #55824: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- The *_ec_* option is enabled and the test will create the ec pool and set it in the layout:...
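Roughly what "set the ec pool in the layout" amounts to on a client (directory path and pool name below are placeholders): the directory layout is pointed at the erasure-coded data pool via the layout xattr.
```python
import os

path = "/mnt/cephfs/dir"    # placeholder directory
pool = b"cephfs.a.data-ec"  # placeholder EC data pool name
os.setxattr(path, "ceph.dir.layout.pool", pool)
print(os.getxattr(path, "ceph.dir.layout.pool"))
```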
- 04:42 AM Bug #55903 (Rejected): src/mds/MDLog.h: 247: FAILED ceph_assert(!segments.empty())
- 03:39 AM Feature #55940 (Pending Backport): quota: accept values in human readable format as well
- Quotas are set in bytes for cephfs subvolumes. This could be simpler if done in a human readable size like M, G or T ...
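An illustrative helper for the requested behaviour (not existing code), converting human readable sizes into the byte values the quota attributes expect:
```python
def parse_human_size(value: str) -> int:
    """Turn "512M", "10G", "1T" (or a plain byte count) into bytes."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
    value = value.strip().upper()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)

assert parse_human_size("10G") == 10 * (1 << 30)
```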
- 01:58 AM Backport #55934 (In Progress): quincy: client: infinite loop "got ESTALE" after mds recovery
- 01:52 AM Backport #55935 (In Progress): pacific: client: infinite loop "got ESTALE" after mds recovery
06/07/2022
- 06:52 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- I've identified the problematic clients as kernel client 5.18.0. Once the auth was removed for these clients the mds'...
- 05:31 PM Backport #55937 (Resolved): pacific: client: Inode::hold_caps_until should be a time from a monot...
- https://github.com/ceph/ceph/pull/46626
- 05:31 PM Backport #55936 (Resolved): quincy: client: Inode::hold_caps_until should be a time from a monoto...
- https://github.com/ceph/ceph/pull/46563
- 05:31 PM Backport #55935 (Resolved): pacific: client: infinite loop "got ESTALE" after mds recovery
- https://github.com/ceph/ceph/pull/46557
- 05:31 PM Backport #55934 (Resolved): quincy: client: infinite loop "got ESTALE" after mds recovery
- https://github.com/ceph/ceph/pull/46558
- 05:29 PM Bug #53504 (Pending Backport): client: infinite loop "got ESTALE" after mds recovery
- 05:27 PM Bug #52982 (Pending Backport): client: Inode::hold_caps_until should be a time from a monotonic c...
- 05:25 PM Backport #55933 (Resolved): quincy: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&...
- https://github.com/ceph/ceph/pull/46566
- 05:25 PM Backport #55932 (Resolved): pacific: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>...
- https://github.com/ceph/ceph/pull/46567
- 05:23 PM Backport #55931 (Resolved): pacific: client: allow overwrites to files with size greater than the...
- https://github.com/ceph/ceph/pull/47972
- 05:23 PM Backport #55930 (Resolved): quincy: client: allow overwrites to files with size greater than the ...
- https://github.com/ceph/ceph/pull/47971
- 05:21 PM Backport #55929 (Resolved): pacific: mds: FAILED ceph_assert(dir->get_projected_version() == dir-...
- https://github.com/ceph/ceph/pull/47180
- 05:21 PM Backport #55928 (Resolved): quincy: mds: FAILED ceph_assert(dir->get_projected_version() == dir->...
- https://github.com/ceph/ceph/pull/47181
- 05:20 PM Bug #54701 (Pending Backport): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CIn...
- 05:20 PM Backport #55927 (Resolved): pacific: Unexpected file access behavior using ceph-fuse
- https://github.com/ceph/ceph/pull/46596
- 05:20 PM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- /a/yuriw-2022-05-31_21:35:41-rados-wip-yuri2-testing-2022-05-31-1300-pacific-distro-default-smithi/6856451
- 05:20 PM Backport #55926 (Resolved): quincy: Unexpected file access behavior using ceph-fuse
- https://github.com/ceph/ceph/pull/46595
- 05:18 PM Bug #53597 (Pending Backport): mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_v...
- 05:15 PM Bug #24894 (Pending Backport): client: allow overwrites to files with size greater than the max_f...
- 05:15 PM Bug #55313 (Pending Backport): Unexpected file access behavior using ceph-fuse
- 04:40 PM Backport #55925 (New): quincy: ceph pacific fails to perform fs/multifs test
- 04:40 PM Backport #55924 (Rejected): pacific: ceph pacific fails to perform fs/multifs test
- 04:38 PM Bug #55620 (Pending Backport): ceph pacific fails to perform fs/multifs test
- 02:01 PM Bug #55842: Upgrading to 16.2.9 with 9M strays files causes MDS OOM
- Do you know what state the MDS was in (up:replay?) when its memory ballooned to 70G?
- 01:57 PM Backport #55658: quincy: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46497
merged
- 01:57 PM Backport #55661: quincy: qa: add test case for fsync crash issue
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46496
merged
- 01:54 PM Backport #55447: quincy: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46476
merged
- 11:55 AM Backport #55916 (Resolved): pacific: qa: fs suite tests failing with "json.decoder.JSONDecodeErro...
- https://github.com/ceph/ceph/pull/46470
- 06:33 AM Bug #55903 (Resolved): src/mds/MDLog.h: 247: FAILED ceph_assert(!segments.empty())
- This is a bug from PR https://github.com/ceph/ceph/pull/45143, which is under testing, not merged yet.
- 05:18 AM Bug #55903: src/mds/MDLog.h: 247: FAILED ceph_assert(!segments.empty())
- There is an issue in a PR that was being tested. See - https://github.com/ceph/ceph/pull/45143#discussion_r890770082
- 01:36 AM Bug #55903: src/mds/MDLog.h: 247: FAILED ceph_assert(!segments.empty())
- Similar to https://tracker.ceph.com/issues/51278
- 12:30 AM Bug #55903 (Rejected): src/mds/MDLog.h: 247: FAILED ceph_assert(!segments.empty())
- From https://pulpito.ceph.com/jlayton-2022-06-06_19:24:37-fs-wip-vshankar-testing1-20220603-134300-distro-default-smi...
06/06/2022
- 03:48 PM Bug #55897 (New): test_nfs: update of export's access type should not trigger NFS service restart
- /a/yuriw-2022-06-03_20:44:47-rados-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/6862967...
- 12:59 PM Bug #55822 (Triaged): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
- 12:50 PM Bug #55842 (Triaged): Upgrading to 16.2.9 with 9M strays files causes MDS OOM
- 12:43 PM Bug #55858 (Triaged): Pacific 16.2.7 MDS constantly crashing
- 12:35 PM Bug #54107 (Resolved): kclient: hang during umount
- 06:11 AM Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
- Latest occurrence with similar backtrace - https://pulpito.ceph.com/vshankar-2022-06-03_10:03:27-fs-wip-vshankar-test...
- 05:55 AM Backport #55865 (Rejected): pacific: qa/cephfs: setting to sudo to True has no effect on _run_pyt...
- https://github.com/ceph/ceph/pull/54180
- 05:55 AM Backport #55864 (New): quincy: qa/cephfs: setting to sudo to True has no effect on _run_python()
- 05:55 AM Backport #55863 (Rejected): pacific: qa/cephfs: mon cap not properly tested in caps_helper.py
- https://github.com/ceph/ceph/pull/54182
- 05:55 AM Backport #55862 (New): quincy: qa/cephfs: mon cap not properly tested in caps_helper.py
- 05:53 AM Bug #55557 (Pending Backport): qa/cephfs: setting to sudo to True has no effect on _run_python()
- 05:51 AM Bug #55558 (Pending Backport): qa/cephfs: mon cap not properly tested in caps_helper.py
- 05:50 AM Bug #50010 (Resolved): qa/cephfs: get_key_from_keyfile() return None when key is not found in key...
- 05:22 AM Bug #55861 (Resolved): Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metr...
- Seen here - https://pulpito.ceph.com/vshankar-2022-06-03_10:03:27-fs-wip-vshankar-testing1-20220603-134300-testing-de...