
| # | Project | Tracker | Status | Priority | Subject | Assignee | Updated | Category | Target version |
|---|---------|---------|--------|----------|---------|----------|---------|----------|----------------|
| 48805 | CephFS | Bug | Pending Backport | Urgent | mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" | Milind Changire | 11/10/2021 10:41 PM | | Ceph - v17.0.0 |
| 52260 | CephFS | Bug | New | Urgent | 1 MDSs are read only \| pacific 16.2.5 | Milind Changire | 09/17/2021 09:39 AM | fsck/damage handling | |
| 50250 | CephFS | Bug | New | Urgent | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") | Milind Changire | 07/06/2021 06:52 PM | | Ceph - v17.0.0 |
| 50387 | CephFS | Bug | Triaged | Urgent | client: fs/snaps failure | Milind Changire | 04/20/2021 09:00 PM | | Ceph - v17.0.0 |
| 45834 | CephFS | Bug | Triaged | Urgent | cephadm: "fs volume create cephfs" overwrites existing placement specification | Milind Changire | 02/25/2021 09:56 AM | Administration/Usability | Ceph - v17.0.0 |
| 48562 | CephFS | Bug | Triaged | Urgent | qa: scrub - object missing on disk; some files may be lost | Milind Changire | 01/15/2021 10:44 PM | | Ceph - v17.0.0 |
| 44274 | CephFS | Feature | New | High | mds: disconnect file data from inode number | Milind Changire | 11/26/2021 11:54 AM | Performance/Resource Usage | |
| 48812 | CephFS | Bug | New | High | qa: test_scrub_pause_and_resume_with_abort failure | Milind Changire | 11/25/2021 05:41 AM | | Ceph - v17.0.0 |
| 52626 | CephFS | Bug | Triaged | High | mds: ScrubStack.cc: 831: FAILED ceph_assert(diri) | Milind Changire | 09/16/2021 02:27 AM | | Ceph - v17.0.0 |
| 51197 | CephFS | Bug | Triaged | High | qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details | Milind Changire | 06/14/2021 01:42 PM | | Ceph - v17.0.0 |
| 48680 | CephFS | Bug | New | High | mds: scrubbing stuck "scrub active (0 inodes in the stack)" | Milind Changire | 01/15/2021 10:44 PM | | Ceph - v17.0.0 |
| 53760 | CephFS | Backport | New | Normal | pacific: snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly | Milind Changire | 01/10/2022 06:41 AM | | |
| 53611 | CephFS | Bug | Triaged | Normal | mds,client: can not identify pool id if pool name is positive integer when set layout.pool | Milind Changire | 01/10/2022 05:51 AM | Correctness/Safety | Ceph - v17.0.0 |
| 52642 | CephFS | Bug | Pending Backport | Normal | snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly | Milind Changire | 01/04/2022 09:10 AM | | Ceph - v17.0.0 |
| 53558 | CephFS | Documentation | New | Normal | Document cephfs recursive accounting | Milind Changire | 12/13/2021 01:40 PM | Administration/Usability | |
| 51824 | CephFS | Bug | Triaged | Normal | pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM | Milind Changire | 11/03/2021 10:42 AM | | |
| 52916 | CephFS | Fix | In Progress | Normal | mds,client: formally remove inline data support | Milind Changire | 10/18/2021 01:44 PM | | Ceph - v17.0.0 |
| 52641 | CephFS | Bug | Triaged | Normal | snap scheduler: Traceback seen when snapshot schedule remove command is passed without required parameters | Milind Changire | 09/20/2021 01:42 PM | | |
| 51459 | CephFS | Documentation | New | Normal | doc: document what kinds of damage forward scrub can repair | Milind Changire | 06/30/2021 10:46 PM | | Ceph - v17.0.0 |
| 16745 | CephFS | Feature | New | Normal | mon: prevent allocating snapids allocated for CephFS | Milind Changire | 04/26/2021 02:40 PM | | Ceph - v17.0.0 |
| 20597 | CephFS | Bug | New | Normal | mds: tree exports should be reported at a higher debug level | Milind Changire | 04/21/2021 03:39 PM | Administration/Usability | Ceph - v17.0.0 |
| 50252 | CephFS | Backport | Need More Info | Normal | octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" | Milind Changire | 04/11/2021 07:27 PM | | |
| 50238 | CephFS | Bug | New | Normal | mds: ceph.dir.rctime for older snaps is erroneously updated | Milind Changire | 04/08/2021 02:07 PM | Correctness/Safety | Ceph - v17.0.0 |
| 48953 | CephFS | Feature | Need More Info | Normal | cephfs-mirror: support snapshot mirror of subdirectories and/or ancestors of a mirrored directory | Milind Changire | 02/04/2021 03:36 PM | Administration/Usability | Ceph - v17.0.0 |
| 46885 | CephFS | Fix | New | Normal | pybind/mgr/mds_autoscaler: add test for MDS scaling with cephadm | Milind Changire | 01/15/2021 10:46 PM | | Ceph - v17.0.0 |
Showing issues 1-25 of 29.
