Activity
From 07/14/2017 to 08/12/2017
08/12/2017
- 01:44 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
- I have a testcase that I'm working on that has two threads, each with their own ceph_mount_info. If those threads end...
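A minimal sketch of that scenario using the public libcephfs API (assumptions: a reachable test cluster and a default ceph.conf; this is an illustration, not the actual testcase from the report):

    /* Two threads, each with its own ceph_mount_info; the concurrent
     * ceph_shutdown() calls at the end are where the reported race sits.
     * Build with: cc race.c -lcephfs -lpthread */
    #include <pthread.h>
    #include <cephfs/libcephfs.h>

    static void *mount_then_shutdown(void *arg)
    {
        struct ceph_mount_info *cmount;

        if (ceph_create(&cmount, NULL) < 0)   /* per-thread client handle */
            return NULL;
        ceph_conf_read_file(cmount, NULL);    /* default config search path */
        if (ceph_mount(cmount, "/") == 0)
            ceph_unmount(cmount);
        ceph_shutdown(cmount);                /* the two shutdowns race here */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, mount_then_shutdown, NULL);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        return 0;
    }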
08/10/2017
- 02:27 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- Ahh yeah, I remember seeing that in there a while back. I guess the danger is that we can end up instantiating an ino...
- 02:29 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- what worries me is the comment in fuse_lowlevel.h...
- 12:27 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- Thanks, Dan! Jewel backport staged: https://github.com/ceph/ceph/pull/16963
- 12:26 PM Backport #20972 (In Progress): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log:...
- 12:25 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- h3. description
10.2.9 introduces a regression where ceph-fuse will segfault at mount time because of an attempt ...
- 11:52 AM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- Confirmed that 10.2.9 plus cbf18b1d80d214e4203e88637acf4b0a0a201ee7 does not segfault.
- 09:04 AM Backport #20972 (Resolved): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- https://github.com/ceph/ceph/pull/16963
- 12:24 PM Bug #18157 (Pending Backport): ceph-fuse segfaults on daemonize
- 09:42 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Could you also please add the luminous backport tag for this?
- 09:23 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/16959
- 02:08 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- lookupname is for the following case:
directories /a and /b have non-default quotas
client A is writing /a/file
client ...
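A hedged sketch of that case via libcephfs (paths, sizes, and the helper name are illustrative; quotas are set through the standard ceph.quota.max_bytes xattr). Each buffered write consults get_quota_root(), which is where the per-write lookupname op is reported to come from:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <cephfs/libcephfs.h>

    /* assumes cmount was already created and mounted */
    void quota_write_case(struct ceph_mount_info *cmount)
    {
        const char *quota = "10000000";   /* ~10 MB, illustrative value */
        char buf[4096];
        int fd, i;

        memset(buf, 'x', sizeof(buf));
        ceph_mkdir(cmount, "/a", 0755);
        ceph_mkdir(cmount, "/b", 0755);
        /* give both directories a non-default quota */
        ceph_setxattr(cmount, "/a", "ceph.quota.max_bytes",
                      quota, strlen(quota), 0);
        ceph_setxattr(cmount, "/b", "ceph.quota.max_bytes",
                      quota, strlen(quota), 0);

        fd = ceph_open(cmount, "/a/file", O_CREAT | O_WRONLY, 0644);
        for (i = 0; i < 1000; i++)        /* every buffered write walks
                                           * toward the quota root */
            ceph_write(cmount, fd, buf, sizeof(buf),
                       (int64_t)i * (int64_t)sizeof(buf));
        ceph_close(cmount, fd);
    }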
08/09/2017
- 04:39 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
- This seems to work...
- 10:45 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Thanks. You're right. Here's the trivial reproducer:...
- 08:46 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- enabling quota and writing to an unlinked file can produce this easily. get_quota_root() uses the dentry in dn_set if it has...
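A hedged sketch of that trigger (illustrative path and helper name; assumes a mounted cmount with quota enabled on the parent directory, e.g. as in the sketch above): the fd stays valid after the unlink, so buffered writes keep hitting an inode with no linked dentry:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <cephfs/libcephfs.h>

    void unlinked_write_case(struct ceph_mount_info *cmount)
    {
        char buf[4096];
        int fd, i;

        memset(buf, 'y', sizeof(buf));
        fd = ceph_open(cmount, "/a/file2", O_CREAT | O_WRONLY, 0644);
        ceph_unlink(cmount, "/a/file2");  /* file is now open but unlinked */
        for (i = 0; i < 1000; i++)        /* each write still goes through
                                           * get_quota_root() */
            ceph_write(cmount, fd, buf, sizeof(buf),
                       (int64_t)i * (int64_t)sizeof(buf));
        ceph_close(cmount, fd);
    }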
- 01:57 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- FUSE is the only caller of ->ll_lookup so a simpler fix might be to just change the mask field to 0 in the _lookup ca...
- 10:39 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- this slowness is due to limitation of fuse API. The attached patch is a workaround. (not 100% sure it doesn't break a...
08/08/2017
- 05:35 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- Partially resolved by: https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d
- 02:25 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- I'm only running the fuse client. I see the problem both on Jewel (10.2.9 servers + fuse client) and on Luminous RC ...
- 02:22 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- I tried on latest Luminous RC + 4.12 kernel client. I got about 7000 opens/second in the two-node read-write case.
did ...
- 02:00 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Our user confirmed that without client-quota their job finishes quickly:...
- 01:46 PM Bug #20945 (Resolved): get_quota_root sends lookupname op for every buffered write
- We have a CAD use-case (hspice) which sees very slow buffered writes, apparently due to the quota code. (We haven't y...
08/07/2017
- 05:15 PM Feature #20885: add syntax for generating OSD/MDS auth caps for cephfs
- PR to master was https://github.com/ceph/ceph/pull/16761
- 03:33 PM Bug #20938 (New): CephFS: concurrent access to file from multiple nodes blocks for seconds
- When accessing the same file opened for read/write on multiple nodes via ceph-fuse, performance drops by about 3 orde...
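One hedged way to quantify this (illustrative path and helper name; run the loop concurrently from two nodes against the same file): time repeated O_RDWR opens through libcephfs and report opens/second, the metric quoted elsewhere in this thread:

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <cephfs/libcephfs.h>

    /* assumes cmount is already mounted; run simultaneously on two nodes */
    void time_opens(struct ceph_mount_info *cmount, int iterations)
    {
        struct timespec t0, t1;
        double secs;
        int i, fd;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < iterations; i++) {
            fd = ceph_open(cmount, "/shared/file", O_RDWR, 0);
            if (fd >= 0)
                ceph_close(cmount, fd);   /* open/close round trip */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f opens/second\n", iterations / secs);
    }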
08/05/2017
- 03:34 AM Bug #20852 (Resolved): hadoop on cephfs would report "Invalid argument" when mount on a sub direc...
- 03:33 AM Feature #20885 (Resolved): add syntax for generating OSD/MDS auth caps for cephfs
08/03/2017
- 09:13 PM Fix #20246 (Resolved): Make clog message on scrub errors friendlier.
- 09:11 PM Bug #20799 (Resolved): Races when multiple MDS boot at once
- 09:11 PM Bug #20806 (Resolved): kclient: fails to delete tree during thrashing w/ multimds
- 09:10 PM Bug #20892 (Resolved): qa: FS_DEGRADED spurious health warnings in some sub-suites
- 04:08 AM Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
- https://github.com/ceph/ceph/pull/16772
- 04:05 AM Bug #20892 (Resolved): qa: FS_DEGRADED spurious health warnings in some sub-suites
- From: /ceph/teuthology-archive/pdonnell-2017-08-02_17:25:29-fs-wip-pdonnell-testing-20170802-distro-basic-smithi/1474...
- 09:09 PM Feature #20760 (Resolved): mds: add perf counters for all mds-to-mds messages
- 09:09 PM Bug #20889 (Resolved): qa: MDS_DAMAGED not whitelisted properly
- 08:36 PM Bug #20889: qa: MDS_DAMAGED not whitelisted properly
- https://github.com/ceph/ceph/pull/16768/
- 02:54 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert Lima wrote:
> Just upgraded the other 2 production clusters where the problem tends to happen frequently.
> ...
- 02:48 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Just upgraded the other 2 production clusters where the problem tends to happen frequently.
Will watch from now on.
- 02:44 AM Bug #20595 (Resolved): mds: export_pin should be included in `get subtrees` output
- 02:43 AM Bug #20731 (Resolved): "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrade:j...
08/02/2017
- 11:31 PM Bug #20889 (Resolved): qa: MDS_DAMAGED not whitelisted properly
- Due to d12c51ca9129213d53c25a00447af431083ad4c9, grep no longer whitelisted MDS_DAMAGED properly. qa/suites/fs/basic_...
- 03:43 PM Feature #20885 (Resolved): add syntax for generating OSD/MDS auth caps for cephfs
- Add a simpler method for generating MDS auth caps based on filesystem name.
https://bugzilla.redhat.com/show_bug.c...
- 03:36 AM Feature #20760 (Fix Under Review): mds: add perf counters for all mds-to-mds messages
- https://github.com/ceph/ceph/pull/16743
08/01/2017
- 10:54 AM Feature #20607 (Resolved): MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
- 10:48 AM Backport #20026 (Resolved): kraken: cephfs: MDS became unresponsive when truncating a very large ...
- 07:20 AM Support #20788 (Closed): MDS report "failed to open ino 10007be02d9 err -61/0" and can not restar...
07/31/2017
- 10:42 PM Bug #20595 (Fix Under Review): mds: export_pin should be included in `get subtrees` output
- https://github.com/ceph/ceph/pull/16714
- 09:54 PM Feature #19230 (Resolved): Limit MDS deactivation to one at a time
- Mon enforces this since 2c08f58ee8353322a342ce043150aafc8dd9c381.
- 09:48 PM Bug #20731: "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrade:jewel-x-lumi...
- PR: https://github.com/ceph/ceph/pull/16713
- 08:57 PM Bug #20731 (In Progress): "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrad...
- Obviously this error is expected when restarting the MDS. We should whitelist the warning.
- 09:05 PM Subtask #20864: kill allow_multimds
- Removing allow_multimds seems reasonable. [Of course, the command should remain a deprecated no-op for deployment com...
- 06:05 PM Subtask #20864 (Resolved): kill allow_multimds
- At this point, allow_multimds is now the default. Under this proposal, its effect is exactly the same as setting max_...
- 10:44 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- One of our production clusters upgraded.
Next one scheduled for next Wednesday, August 2nd.
07/30/2017
- 05:30 AM Bug #20852: hadoop on cephfs would report "Invalid argument" when mount on a sub directory
- https://github.com/ceph/ceph/pull/16671
07/29/2017
- 10:46 AM Bug #20852 (Resolved): hadoop on cephfs would report "Invalid argument" when mount on a sub direc...
- we have tested hadoop on cephfs and hbase on cephfs.
and we got the following stack on hbase:
Failed to become active m...
- 02:47 AM Feature #20851 (New): cephfs fuse support "secret" option
- we know that the cephfs kernel mount supports the "secret" option,
example:...
07/28/2017
- 04:57 PM Bug #20805 (Resolved): qa: test_client_limits waiting for wrong health warning
- 04:57 PM Bug #20677 (Resolved): mds: abrt during migration
- 01:38 PM Bug #20806 (Fix Under Review): kclient: fails to delete tree during thrashing w/ multimds
- https://github.com/ceph/ceph/pull/16654
- 07:46 AM Bug #20806 (In Progress): kclient: fails to delete tree during thrashing w/ multimds
- it's caused by a bug in the "open inode by inode number" function
- 07:33 AM Support #20788: MDS report "failed to open ino 10007be02d9 err -61/0" and can not restart success
- now we have figured out the reason:
it was killed by docker when the mds reached its memory limit
thanks for your help!
- 07:11 AM Support #20788: MDS report "failed to open ino 10007be02d9 err -61/0" and can not restart success
- "failed to open ino" is a normal when mds is recovery. what do you mean "ceph can not restart"? mds crashed or mds hu...
- 06:16 AM Backport #20823 (Resolved): jewel: client::mkdirs not handle well when two clients send mkdir req...
- https://github.com/ceph/ceph/pull/20271
- 02:29 AM Bug #20566 (Resolved): "MDS health message (mds.0): Behind on trimming" in powercycle tests
- 12:30 AM Bug #20792: cephfs: ceph fs new is err when no default rbd pool
- Maybe my version has a problem; I'll check it.
thank you, John and sage.
07/27/2017
- 09:39 PM Bug #20805: qa: test_client_limits waiting for wrong health warning
- https://github.com/ceph/ceph/pull/16640
- 08:59 PM Bug #20805: qa: test_client_limits waiting for wrong health warning
- From: /ceph/teuthology-archive/pdonnell-2017-07-26_18:40:14-fs-wip-pdonnell-testing-20170725-distro-basic-smithi/1448...
- 08:58 PM Bug #20805 (Resolved): qa: test_client_limits waiting for wrong health warning
- ...
- 09:29 PM Bug #20806: kclient: fails to delete tree during thrashing w/ multimds
- Zheng, please take a look.
- 09:29 PM Bug #20806 (Resolved): kclient: fails to delete tree during thrashing w/ multimds
- ...
- 09:20 PM Support #20788: MDS report "failed to open ino 10007be02d9 err -61/0" and can not restart success
- 61 is ENODATA. Sounds like something broke in the cluster; you'll need to provide a timeline of events.
- 03:06 AM Support #20788 (Closed): MDS report "failed to open ino 10007be02d9 err -61/0" and can not restar...
- ceph version is v10.2.8
now my ceph can not restart
i have cephfs_metadata and cephfs_data pools
i can not reprodu...
- 05:48 PM Bug #20799 (Fix Under Review): Races when multiple MDS boot at once
- 05:48 PM Bug #20799: Races when multiple MDS boot at once
- https://github.com/ceph/ceph/pull/16631
- 04:53 PM Bug #20799 (Resolved): Races when multiple MDS boot at once
- There is a race in MDSRank::starting_done() between MDCache::open_root() and MDLog::start_new_segment()
An MDS in ...
- 04:15 PM Bug #20792 (Need More Info): cephfs: ceph fs new is err when no default rbd pool
- 04:15 PM Bug #20792: cephfs: ceph fs new is err when no default rbd pool
- What version did you see this on? Current master already skips pool id 0.
- 02:16 PM Bug #20792 (Fix Under Review): cephfs: ceph fs new is err when no default rbd pool
- https://github.com/ceph/ceph/pull/16626
- 01:05 PM Bug #20792 (Need More Info): cephfs: ceph fs new is err when no default rbd pool
- The default rbd pool is deleted in the new version.
link:...
07/26/2017
- 04:23 PM Bug #20761 (Resolved): fs status: KeyError in handle_fs_status
- Fixed by 71ea1716043843dd191830f0bcbcc4a88059a9c2.
07/24/2017
- 08:01 PM Bug #20761 (Resolved): fs status: KeyError in handle_fs_status
- ...
- 06:14 PM Feature #20760 (Resolved): mds: add perf counters for all mds-to-mds messages
- The idea here is to get better visibility into what the MDS are doing through external tools (graphs). Continuation of #19362.
07/23/2017
- 08:38 AM Feature #20752 (Resolved): cap message flag which indicates if client still has pending capsnap
- current mds code uses "(cap->issued() & CEPH_CAP_ANY_FILE_WR) == 0" to infer that client has no pending capsnap. ther...
07/21/2017
- 10:30 PM Bug #20594 (In Progress): mds: cache limits should be expressed in memory usage, not inode count
- 08:31 PM Bug #20682 (Resolved): qa: test_client_pin looking for wrong health warning string
- 08:31 PM Bug #20569 (Resolved): mds: don't mark dirty rstat on non-auth inode
- 08:29 PM Bug #20592 (Pending Backport): client::mkdirs not handle well when two clients send mkdir request...
- 08:28 PM Bug #20583 (Resolved): mds: improve wording when mds respawns due to mdsmap removal
- 08:27 PM Bug #20072 (Resolved): TestStrays.test_snapshot_remove doesn't handle head whiteout in pgls results
- 04:00 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Patrick Donnelly wrote:
> See this announcement: http://ceph.com/geen-categorie/v10-2-4-jewel-released/
Thank you...
- 03:30 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert Lima wrote:
> Patrick Donnelly wrote:
> > Any update?
> Hey Patrick, I have upgraded one test cluster first,...
- 11:52 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Patrick Donnelly wrote:
> Any update?
Hey Patrick, I have upgraded one test cluster first, but it keeps as HEALTH_WA...
- 05:56 AM Feature #20607: MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
- Proposed doc fix: https://github.com/ceph/ceph/pull/16471
- 04:15 AM Bug #20735 (New): mds: stderr:gzip: /var/log/ceph/ceph-mds.f.log: file size changed while zipping
- Some logs are still being written to after the MDS is terminated. This only happens with valgrind and tasks/cfuse_wor...
07/20/2017
- 11:49 PM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Zheng, please take a look.
- 11:47 PM Bug #20118 (Duplicate): Test failure: test_ops_throttle (tasks.cephfs.test_strays.TestStrays)
- 11:15 PM Bug #20118: Test failure: test_ops_throttle (tasks.cephfs.test_strays.TestStrays)
- 11:44 PM Bug #16920: mds.inodes* perf counters sound like the number of inodes but they aren't
- 11:43 PM Bug #17837 (Resolved): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 11:40 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Any update?
- 11:34 PM Bug #8807 (Closed): multimds: kernel_untar_build.sh is failing to remove all files
- Haven't seen this with the latest multimds fixes (probably thanks to Zheng) and the test files are no longer availabl...
- 11:33 PM Bug #10542 (Resolved): ceph-fuse cap trimming fails with: mount: only root can use "--options" op...
- This is caught during startup now and causes ceph-fuse to fail unless --client-die-on-failed-remount=false is set. Ma...
- 11:28 PM Bug #11314 (In Progress): qa: MDS crashed and the runs hung without ever timing out
- 11:28 PM Bug #11314: qa: MDS crashed and the runs hung without ever timing out
- I added a DaemonWatchdog in the mds_thrash.py code that catches this kind of thing. We should pull it out into its ow...
- 11:25 PM Bug #11986 (Closed): logs changing during tarball generation at end of job
- Haven't seen this one recently. Closing.
- 11:17 PM Bug #19255 (Need More Info): qa: test_full_fclose failure
- John, do you have a test failure to point to?
- 11:16 PM Bug #19712: some kcephfs tests become very slow
- Any update on this one Zheng?
- 11:15 PM Bug #19812: client: not swapping directory caps efficiently leads to very slow create chains
- 09:51 PM Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
- Patrick Donnelly wrote:
> Douglas Fuller wrote:
> > (edited to add: this may be more automagic than we want since...
- 09:17 PM Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
- Douglas Fuller wrote:
> Patrick Donnelly wrote:
> > Douglas Fuller wrote:
> > > Sure, but I don't want the user to...
- 09:01 PM Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
- Patrick Donnelly wrote:
> Douglas Fuller wrote:
> > Sure, but I don't want the user to have to understand that dist...
- 08:28 PM Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
- Douglas Fuller wrote:
> Patrick Donnelly wrote:
> > > I'd like to work "mds" into this command somehow to make it c... - 07:42 PM Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
- Patrick Donnelly wrote:
> > I'd like to work "mds" into this command somehow to make it clear to the user that they ... - 07:30 PM Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
- Douglas Fuller wrote:
> So really this would just set max_mds to 0. I do think this should trigger HEALTH_ERR unless...
- 09:23 PM Bug #20731 (Resolved): "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrade:j...
- Run: http://pulpito.ceph.com/teuthology-2017-07-19_04:23:05-upgrade:jewel-x-luminous-distro-basic-smithi/
Jobs: 38
...
- 08:27 PM Backport #20714 (Rejected): jewel: Adding pool with id smaller than existing data pool ids breaks...
- 07:57 PM Feature #20606: mds: improve usability of cluster rank manipulation and setting cluster up/down
- John Spray wrote:
> The last point about cluster down: looking at http://tracker.ceph.com/issues/20609, I'm not sure...
- 07:47 PM Feature #20611: MDSMonitor: do not show cluster health warnings for file system intentionally mar...
- Douglas Fuller wrote:
> Taking an MDS down for hardware maintenance, etc, should trigger a health warning because su...
- 07:44 PM Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
- Dropping the old behavior for decreasing max_mds (i.e. do nothing but set it) is okay with me. BTW, there should be a...
- 07:12 PM Documentation #6771 (Closed): add mds configuration
- 12:39 PM Documentation #6771: add mds configuration
- This can probably be closed?
- 07:03 PM Bug #20334 (Resolved): I/O become slowly when multi mds which subtree root has replica
- Thanks for letting us know!
- 01:07 AM Bug #20334: I/O become slowly when multi mds which subtree root has replica
- v12.1.0 has solved the problem, please help close this issue, thank you!
- 12:13 AM Bug #20334 (Need More Info): I/O become slowly when multi mds which subtree root has replica
- 12:55 PM Feature #20607: MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
- I don't find deactivate so bad since this command primarily deals with ranks (that just happened to be backed by a c...
- 12:39 AM Bug #20122 (Need More Info): Ceph MDS crash with assert failure
- A debug log from the MDS is necessary to diagnose this, I think. See: http://docs.ceph.com/docs/giant/rados/troubleshoo...
- 12:12 AM Bug #20469 (Need More Info): Ceph Client can't access file and show '???'
- 12:12 AM Bug #20566 (Fix Under Review): "MDS health message (mds.0): Behind on trimming" in powercycle tests
- PR: https://github.com/ceph/ceph/pull/16435
- 12:09 AM Bug #20566 (In Progress): "MDS health message (mds.0): Behind on trimming" in powercycle tests
- 12:00 AM Bug #20569: mds: don't mark dirty rstat on non-auth inode
07/19/2017
- 11:56 PM Bug #20595 (In Progress): mds: export_pin should be included in `get subtrees` output
- 11:56 PM Bug #20614 (Duplicate): [WRN] MDS daemon 'a-s' is not responding, replacing it as rank 0 with ...
- 09:10 PM Bug #19890 (Resolved): src/test/pybind/test_cephfs.py fails
- 09:09 PM Backport #20500 (Resolved): kraken: src/test/pybind/test_cephfs.py fails
- 09:03 PM Bug #17939 (Resolved): non-local cephfs quota changes not visible until some IO is done
- 09:03 PM Backport #19763 (Resolved): kraken: non-local cephfs quota changes not visible until some IO is done
- 09:02 PM Fix #19708 (Resolved): Enable MDS to start when session ino info is corrupt
- 09:02 PM Backport #19710 (Resolved): kraken: Enable MDS to start when session ino info is corrupt
- 09:01 PM Backport #19680 (Resolved): kraken: MDS: damage reporting by ino number is useless
- 09:00 PM Bug #18757 (Resolved): Jewel ceph-fuse does not recover after lost connection to MDS
- 09:00 PM Backport #19678 (Resolved): kraken: Jewel ceph-fuse does not recover after lost connection to MDS
- 08:59 PM Bug #18914 (Resolved): cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client....
- 08:59 PM Backport #19676 (Resolved): kraken: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_v...
- 08:56 PM Bug #19033 (Resolved): cephfs: mds is crashed, after I set about 400 64KB xattr kv pairs to a file
- 08:56 PM Backport #19674 (Resolved): kraken: cephfs: mds is crashed, after I set about 400 64KB xattr kv p...
- 08:55 PM Bug #19204 (Resolved): MDS assert failed when shutting down
- 08:55 PM Backport #19672 (Resolved): kraken: MDS assert failed when shutting down
- 08:54 PM Bug #19401 (Resolved): MDS goes readonly writing backtrace for a file whose data pool has been re...
- 08:54 PM Backport #19669 (Resolved): kraken: MDS goes readonly writing backtrace for a file whose data poo...
- 08:53 PM Bug #19437 (Resolved): fs: The mount point break off when mds switch happened.
- 08:53 PM Backport #19667 (Resolved): kraken: fs: The mount point break off when mds switch happened.
- 08:05 PM Bug #19501 (Resolved): C_MDSInternalNoop::complete doesn't free itself
- 08:05 PM Backport #19664 (Resolved): kraken: C_MDSInternalNoop::complete doesn't free itself
- 08:04 PM Bug #18872 (Resolved): write to cephfs mount hangs, ceph-fuse and kernel
- 08:04 PM Backport #19845 (Resolved): kraken: write to cephfs mount hangs, ceph-fuse and kernel
- 02:08 PM Bug #20122: Ceph MDS crash with assert failure
- Thank you for your help. We've had several occurrences of this same issue since. This isn't something easily replicat...
- 01:43 PM Bug #19635 (Resolved): Deadlock on two ceph-fuse clients accessing the same file
- 01:43 PM Backport #20028 (Resolved): kraken: Deadlock on two ceph-fuse clients accessing the same file
- 07:40 AM Bug #20681 (Need More Info): kclient: umount target is busy
- 'sudo umount /home/ubuntu/cephtest/mnt.0' failed, but 'sudo umount /home/ubuntu/cephtest/mnt.0 -f' succeeded.
It's...
- 06:38 AM Bug #20677 (Fix Under Review): mds: abrt during migration
- https://github.com/ceph/ceph/pull/16410/commits/58623d781da1189d2e88cf4875294353db78cea9
- 04:07 AM Bug #20677 (In Progress): mds: abrt during migration
- 03:47 AM Bug #20622 (Closed): mds: takeover mds stuck in up:replay after thrashing rank 0
- 02:34 AM Bug #20569: mds: don't mark dirty rstat on non-auth inode
- New PR here: https://github.com/ceph/ceph/pull/16337
07/18/2017
- 11:06 PM Bug #20682 (Resolved): qa: test_client_pin looking for wrong health warning string
- ...
- 10:46 PM Bug #20681 (Closed): kclient: umount target is busy
- ...
- 09:30 PM Bug #20677: mds: abrt during migration
- Zheng: note this happened when the thrasher deactivated a rank.
- 09:28 PM Bug #20677: mds: abrt during migration
- Zheng, please take a look.
- 09:28 PM Bug #20677 (Resolved): mds: abrt during migration
- ...
- 09:24 PM Bug #20452: Adding pool with id smaller than existing data pool ids breaks MDSMap::is_data_pool
- Kraken is soon-to-be-EOL, yes.
- 08:32 PM Bug #20452: Adding pool with id smaller than existing data pool ids breaks MDSMap::is_data_pool
- Kraken is EOL right? Just jewel I think.
- 08:26 PM Bug #20452: Adding pool with id smaller than existing data pool ids breaks MDSMap::is_data_pool
- @Patrick - backport to which stable versions?
- 04:29 PM Bug #20452 (Pending Backport): Adding pool with id smaller than existing data pool ids breaks MDS...
- 04:25 PM Bug #20452 (Resolved): Adding pool with id smaller than existing data pool ids breaks MDSMap::is_...
- 08:34 PM Bug #20055: Journaler may execute on_safe contexts prematurely
- Originally slated for jewel backport, but this was reconsidered. The jewel backport tracker was http://tracker.ceph.c...
- 04:33 PM Bug #20582 (Resolved): common: config showing ints as floats
- 04:32 PM Bug #20537 (Resolved): mds: MDLog.cc: 276: FAILED assert(!capped)
- 04:32 PM Bug #20440 (Resolved): mds: mds/journal.cc: 1559: FAILED assert(inotablev == mds->inotable->get_v...
- 04:24 PM Bug #20441 (Resolved): mds: failure during data scan
- 02:22 PM Bug #20441 (Closed): mds: failure during data scan
- 04:23 PM Feature #10792 (Resolved): qa: enable thrasher for MDS cluster size (vary max_mds)
- 01:29 PM Bug #20659: MDSMonitor: assertion failure if two mds report same health warning
- No it isn't more recent. Thanks!
- 10:56 AM Bug #20659 (Resolved): MDSMonitor: assertion failure if two mds report same health warning
- Unless this test run was more recent than the fix, I think this is https://github.com/ceph/ceph/pull/16302
- 05:22 AM Bug #20659 (Resolved): MDSMonitor: assertion failure if two mds report same health warning
- ...
- 11:20 AM Feature #20606: mds: improve usability of cluster rank manipulation and setting cluster up/down
- The last point about cluster down: looking at http://tracker.ceph.com/issues/20609, I'm not sure what the higher leve...
07/17/2017
- 10:24 PM Feature #20606: mds: improve usability of cluster rank manipulation and setting cluster up/down
- John Spray wrote:
> My thoughts on this:
>
> * maybe we should preface this class of command (manipulating the MD...
- 09:31 PM Feature #19109 (Fix Under Review): Use data pool's 'df' for statfs instead of global stats, if th...
- https://github.com/ceph/ceph/pull/16378
- 02:11 PM Bug #20566: "MDS health message (mds.0): Behind on trimming" in powercycle tests
- it's a transient warning caused by backfill. I think we should add this warning to the whitelist
- 01:52 PM Bug #20594: mds: cache limits should be expressed in memory usage, not inode count
- See #4504 and associated MemoryModel tickets.
- 01:48 PM Bug #20593 (Fix Under Review): mds: the number of inode showed by "mds perf dump" not correct aft...
07/14/2017
- 06:30 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- ok i'll download them just in case.
- 06:01 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- It will expire in a week or two.
- 04:01 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Patrick Donnelly wrote:
> https://shaman.ceph.com/builds/ceph/i20535-backport-v10.2.9/
Thanks, I'll be upgrading ...
- 11:19 AM Feature #20606: mds: improve usability of cluster rank manipulation and setting cluster up/down
- My thoughts on this:
* maybe we should preface this class of command (manipulating the MDS ranks) with "cluster", ... - 04:11 AM Bug #20622: mds: takeover mds stuck in up:replay after thrashing rank 0
- 3 of 6 osds failed. But there is no clue why they failed....
- 12:19 AM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- You'd give the available space for that pool (i.e. how many bytes can they write before it becomes full). Same as in...
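For reference, a hedged sketch of the client-side view (helper name is illustrative; ceph_statfs fills a struct statvfs, and f_bavail * f_frsize is the usual "bytes writable before full" computation):

    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <cephfs/libcephfs.h>

    /* assumes cmount is already mounted */
    void print_available(struct ceph_mount_info *cmount)
    {
        struct statvfs st;

        if (ceph_statfs(cmount, "/", &st) == 0)
            printf("available: %llu bytes\n",
                   (unsigned long long)st.f_bavail * st.f_frsize);
    }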