Noah Watkins

Activity

12/04/2019

07:02 PM Ceph Revision 3baebbea (ceph): qa/standalone/ceph-helpers.sh: fix mgr module path
callers of get_python_path were not passing in a $1 parameter, so
ceph_lib was an empty string resulting in an invali...
06:19 PM Ceph Revision bf3a1a29 (ceph): qa/standalone/ceph-helpers.sh: fix mgr module path
callers of get_python_path were not passing in a $1 parameter, so
ceph_lib was an empty string resulting in an invali...
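
Both revisions carry the same fix. A minimal sketch of the pattern the message describes, assuming a simplified get_python_path; the real ceph-helpers.sh body and the path it builds differ, and $CEPH_ROOT and the path components below are only placeholders:

    # Sketch only: default $1 so ceph_lib is never empty when a caller
    # omits the argument (path components are illustrative).
    function get_python_path() {
        local ceph_lib=${1:-$CEPH_ROOT/build/lib}
        echo "$ceph_lib/cython_modules/lib.3"
    }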

08/12/2019

09:08 PM mgr Bug #39955: After upgrade to Nautilus 14.2.1 mon DB is growing too fast when state of cluster is ...
While we don't track much data in the insights module, it looks like the issue here is that the state managed by the ...
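
Background for the comment above: mgr modules persist their state through the monitors' key/value (config-key) store, so the keys a module has written can be listed directly. The mgr/insights prefix below is an assumption about the module's key namespace:

    # List any keys the insights module has persisted via the mon
    # config-key store; the grep pattern is an assumed prefix.
    ceph config-key ls | grep 'mgr/insights'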

07/22/2019

07:50 PM mgr Bug #40871: osd status reports old crush location after osd moves
https://bugzilla.redhat.com/show_bug.cgi?id=1724428
07:48 PM mgr Bug #40871 (Resolved): osd status reports old crush location after osd moves
Scenario:
Move an OSD disk from host=worker1 to a new node (host=worker0) and on that new node we update the crush...
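
A rough reproduction of the scenario, using standard CLI commands; osd.0, the weight, and the host names are illustrative, and create-or-move stands in for however the deployment actually updates the CRUSH location:

    # Re-register the OSD under the new host bucket, then compare the
    # mon's view (osd tree) with what the mgr's `osd status` reports.
    ceph osd crush create-or-move osd.0 1.0 host=worker0 root=default
    ceph osd tree | grep -A1 worker0    # new location visible here
    ceph osd status                     # bug: may still show worker1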

06/21/2019

08:42 PM mgr Bug #39955: After upgrade to Nautilus 14.2.1 mon DB is growing too fast when state of cluster is ...
Thanks Josh. I'll do a quick audit with this info.
06:38 PM mgr Bug #39955: After upgrade to Nautilus 14.2.1 mon DB is growing too fast when state of cluster is ...
Oh, you mean log messages like debug/info etc... I was thinking you meant that with _any_ communication with the moni...
06:09 PM mgr Bug #39955: After upgrade to Nautilus 14.2.1 mon DB is growing too fast when state of cluster is ...
Josh, can you speak a little more about the types of messages related to `logm` keys? It's conceivable that some issu...
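
For reference while auditing: the logm keys are where cluster log entries land in the mon store, so their count can be checked offline. A rough check, assuming a stopped mon (or a copy of its store) and that dump-keys prints one "prefix key" pair per line; the data path is illustrative:

    # Count logm-prefixed keys in a monitor's store.
    ceph-monstore-tool /var/lib/ceph/mon/ceph-a dump-keys | grep '^logm' | wc -l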

06/17/2019

05:37 PM mgr Bug #39955: After upgrade to Nautilus 14.2.1 mon DB is growing too fast when state of cluster is ...
Hi,
It is conceivable that a bug in insights could cause a huge number of keys to be created, but I'm not seeing a...

06/12/2019

10:26 PM mgr Bug #40011: ceph -s shows wrong number of pools when pool was deleted
It looks to me like `ceph status` is getting this state not from the ceph-mgr but from the MgrStatMonitor PaxosServic...
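
A quick way to surface the mismatch described here; both commands are standard, and the stale count is what the bug tracks:

    # Compare the pool count in the status summary (derived from the
    # MgrStatMonitor digest) with the pools actually in the OSDMap.
    ceph -s | grep 'pools:'          # summary line with the pool count
    ceph osd lspools | wc -l         # number of pools in the OSDMap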
