Activity
From 10/15/2017 to 11/13/2017
11/13/2017
- 01:56 PM Bug #22116: prometheus module 500 if 'deep' in pg states
- Related: https://github.com/ceph/ceph/pull/18903
- 10:57 AM Bug #22116 (Resolved): prometheus module 500 if 'deep' in pg states
- Ticket to track https://github.com/ceph/ceph/pull/18890
11/09/2017
- 03:12 PM Bug #22096: Authentication failed, did you specify a mgr ID with a valid keyring?
- ceph.conf - I've tried it with and without the [mgr] section - makes no difference....
- 12:57 PM Bug #22096: Authentication failed, did you specify a mgr ID with a valid keyring?
- Pretty odd. I'd look at your ceph.conf to see if there's anything bogus there
- 11:27 AM Bug #22096: Authentication failed, did you specify a mgr ID with a valid keyring?
- I've just run ceph-mgr with strace and it never tries to access or open any keyring files - nothing in /var/lib/ceph ...
- 10:44 AM Bug #22096: Authentication failed, did you specify a mgr ID with a valid keyring?
- I don't see anything wrong with that key information.
You could use "strace ceph-mgr -i $HOST -d" to try and see...
- 09:59 AM Bug #22096 (Resolved): Authentication failed, did you specify a mgr ID with a valid keyring?
- I have a test cluster which has been running fine with jewel, which I am trying to upgrade to luminous. (12.2.1, usin...
- 02:40 PM Bug #21518 (Resolved): mgr[zabbix] float division by zero
- 02:39 PM Backport #21648 (Resolved): luminous: mgr[zabbix] float division by zero
- 02:15 PM Bug #21904 (Pending Backport): mgr[zabbix] float division by zero (osd['kb'] = 0)
- 12:46 PM Bug #22070: "ceph pg dump" has stale states listed (322 PGs in states, only 320 PGs in cluster!)
- It looks like the `ceph -s` command is handled by the mon and num_pg_by_state is not correct, but I have not found out why.
- 12:45 PM Dashboard Bug #22098 (Fix Under Review): dashboard failing to load audit log
- https://github.com/ceph/ceph/pull/18848
- 12:34 PM Dashboard Bug #22098 (Resolved): dashboard failing to load audit log
- You'd just see nothing when clicking on the audit tab on the health page.
This got broken when we made `log last` ...
- 05:26 AM Backport #22075 (In Progress): luminous: mgr tests don't indicate failure if exception thrown fro...
- https://github.com/ceph/ceph/pull/18832
11/08/2017
- 10:25 AM Backport #22075 (Resolved): luminous: mgr tests don't indicate failure if exception thrown from s...
- 10:20 AM Bug #20629 (Resolved): Spurious ceph-mgr failovers during mon elections
- 10:20 AM Bug #21404 (Resolved): ceph-mgr gets process called "exe" after respawn
- 10:19 AM Backport #22035 (Resolved): luminous: Spurious ceph-mgr failovers during mon elections
- 10:19 AM Backport #21547 (Resolved): luminous: ceph-mgr gets process called "exe" after respawn
- 10:19 AM Bug #20950 (Resolved): key mismatch for mgr after upgrade from jewel to luminous(dev)
- 10:18 AM Backport #22034 (Resolved): luminous: key mismatch for mgr after upgrade from jewel to luminous(d...
- 10:03 AM Bug #21158 (Resolved): DaemonState members accessed outside of locks
- 10:02 AM Backport #21524 (Resolved): luminous: DaemonState members accessed outside of locks
- was included in https://github.com/ceph/ceph/pull/18675
- 08:12 AM Bug #21904: mgr[zabbix] float division by zero (osd['kb'] = 0)
- https://github.com/ceph/ceph/pull/18809
- 07:57 AM Bug #21904 (Fix Under Review): mgr[zabbix] float division by zero (osd['kb'] = 0)
- https://github.com/ceph/ceph/pull/18515/files
- 07:28 AM Bug #22070 (Can't reproduce): "ceph pg dump" has stale states listed (322 PGs in states, only 320...
- In the `ceph pg dump` output we cannot find the scrubbing PG, like below:
It looks like there are two more PGs in the state counts than in the total? ...
- 06:39 AM Dashboard Bug #21572 (Resolved): dashboard OSD list has servers and osds in arbitrary order
- 06:39 AM Backport #21638 (Resolved): luminous: dashboard OSD list has servers and osds in arbitrary order
- 06:38 AM Bug #20568 (Resolved): the dashboard uses absolute links for filesystems and clients
- 06:38 AM Backport #21549 (Resolved): luminous: the dashboard uses absolute links for filesystems and clients
- 06:37 AM Dashboard Bug #21570 (Resolved): dashboard barfs on nulls where it expects numbers
- 06:37 AM Backport #22032 (Resolved): luminous: dashboard barfs on nulls where it expects numbers
11/06/2017
- 03:04 AM Bug #21957: ceph osd status ignores --format option
- The `ceph osd status` command is implemented by a ceph-mgr plugin called `status`. This plugin is written in Python...
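The bug is that the plugin formats its output itself and never consults the requested format. A minimal sketch of what honoring `--format` could look like (the function and field names here are illustrative, not the actual ceph-mgr module API):

```python
import json

# Hypothetical command handler: 'osds' is a list of dicts describing OSDs;
# 'fmt' carries the value of the --format option ('plain' by default).
def handle_osd_status(osds, fmt='plain'):
    if fmt == 'json':
        # Machine-readable output when --format json is requested
        return json.dumps(osds)
    # Default human-readable table (what the status plugin always emits today)
    lines = ['id  host      state']
    for o in osds:
        lines.append('%-3d %-9s %s' % (o['id'], o['host'], o['state']))
    return '\n'.join(lines)
```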
11/04/2017
- 11:37 PM Backport #21524 (In Progress): luminous: DaemonState members accessed outside of locks
- https://github.com/ceph/ceph/pull/18739
- 11:20 PM Backport #21547 (In Progress): luminous: ceph-mgr gets process called "exe" after respawn
- https://github.com/ceph/ceph/pull/18738
- 11:17 PM Backport #21549 (In Progress): luminous: the dashboard uses absolute links for filesystems and cl...
- https://github.com/ceph/ceph/pull/18737
- 11:14 PM Backport #21638 (In Progress): luminous: dashboard OSD list has servers and osds in arbitrary order
- https://github.com/ceph/ceph/pull/18736
- 08:49 PM Backport #21648 (In Progress): luminous: mgr[zabbix] float division by zero
- https://github.com/ceph/ceph/pull/18734
- 07:15 PM Backport #21656 (In Progress): luminous: crash on DaemonPerfCounters::update
- https://github.com/ceph/ceph/pull/18733
- 07:06 PM Backport #21875 (In Progress): luminous: ceph-mgr spuriously reloading OSD metadata on map changes
- https://github.com/ceph/ceph/pull/18732
- 07:01 PM Backport #22029 (In Progress): luminous: restarting active ceph-mgr cause glitches in bps and iop...
- https://github.com/ceph/ceph/pull/18735
- 04:55 PM Backport #22030 (In Progress): luminous: List of filesystems does not get refreshed after a files...
- https://github.com/ceph/ceph/pull/18730
- 09:17 AM Backport #22032 (In Progress): luminous: dashboard barfs on nulls where it expects numbers
- https://github.com/ceph/ceph/pull/18728
- 09:04 AM Backport #22034 (In Progress): luminous: key mismatch for mgr after upgrade from jewel to luminou...
- https://github.com/ceph/ceph/pull/18727
- 08:52 AM Backport #22035 (In Progress): luminous: Spurious ceph-mgr failovers during mon elections
- https://github.com/ceph/ceph/pull/18726
- 05:13 AM Backport #22023 (In Progress): luminous: mgr: dashboard plugin OSD daemons' table the Usage colum...
- https://github.com/ceph/ceph/pull/18723
- 04:36 AM Backport #21946 (In Progress): luminous: `fs status` always says 0 clients
- https://github.com/ceph/ceph/pull/18722
- 02:55 AM Bug #21999 (Pending Backport): mgr tests don't indicate failure if exception thrown from serve()
11/03/2017
- 03:50 PM Backport #22035 (Resolved): luminous: Spurious ceph-mgr failovers during mon elections
- https://github.com/ceph/ceph/pull/18726
- 03:50 PM Backport #22034 (Resolved): luminous: key mismatch for mgr after upgrade from jewel to luminous(d...
- https://github.com/ceph/ceph/pull/18727
- 03:50 PM Backport #22032 (Resolved): luminous: dashboard barfs on nulls where it expects numbers
- https://github.com/ceph/ceph/pull/18728
- 03:50 PM Backport #22030 (Resolved): luminous: List of filesystems does not get refreshed after a filesyst...
- 03:49 PM Backport #22029 (Resolved): luminous: restarting active ceph-mgr cause glitches in bps and iops m...
- https://github.com/ceph/ceph/pull/18735
- 03:49 PM Backport #22023 (Resolved): luminous: mgr: dashboard plugin OSD daemons' table the Usage column's...
- 12:11 PM Feature #17454 (Resolved): Don't force module classes to be called "Module"
11/02/2017
- 02:23 PM Feature #17461 (Resolved): Enable python modules to operate in standby mode
- https://github.com/ceph/ceph/pull/16651
+ backport to luminous via https://github.com/ceph/ceph/pull/18675
- 02:09 PM Feature #17452 (Resolved): Emit notifications on monmap updates
- This was fixed in:...
- 02:07 PM Feature #17457 (Closed): Port REST API tests from Calamari
- The rest module went away, so this ticket is no longer relevant.
- 02:03 PM Feature #17460 (Resolved): Enable python modules to advertise services
- NB this will also show up in Luminous via https://github.com/ceph/ceph/pull/18675
- 02:02 PM Bug #21367 (Can't reproduce): 'ZabbixSender' object has no attribute 'hostname'
- 02:01 PM Bug #21593 (Resolved): segv in PyList_New from PyFormatter
- NB this will also show up in Luminous via https://github.com/ceph/ceph/pull/18675
- 02:00 PM Dashboard Bug #21570 (Pending Backport): dashboard barfs on nulls where it expects numbers
- 02:00 PM Feature #21594 (Resolved): prometheus meta-series describing OSD<->disk mapping
- NB this will also show up in Luminous via https://github.com/ceph/ceph/pull/18675
- 10:37 AM Bug #20950 (Pending Backport): key mismatch for mgr after upgrade from jewel to luminous(dev)
11/01/2017
- 07:19 PM Bug #21122 (Closed): Kraken to luminous upgrade: Error EINVAL: key for mgr.vpm037 exists but cap ...
- This kind of looks like the mgr daemon with that name just already existed?
I'm going to close this as the bootstr...
- 06:48 PM Bug #21891: ceph mgr stopping result in segfault
- 06:48 PM Bug #21891: ceph mgr stopping result in segfault
- The assertion in the dashboard module is related to that PR, but the actual segfault is not.
I suspect the segfaul...
- 04:13 PM Bug #21999 (Fix Under Review): mgr tests don't indicate failure if exception thrown from serve()
- https://github.com/ceph/ceph/pull/18672
- 02:54 PM Bug #21999 (Resolved): mgr tests don't indicate failure if exception thrown from serve()
- 02:25 PM Bug #17737 (Resolved): Crash in get_metadata_python during MDS restart
- 02:25 PM Backport #21659 (Resolved): luminous: Crash in get_metadata_python during MDS restart
- 02:08 PM Bug #21981 (Pending Backport): mgr: dashboard plugin OSD daemons' table the Usage column's value ...
- 03:25 AM Bug #21981 (Fix Under Review): mgr: dashboard plugin OSD daemons' table the Usage column's value ...
- 01:39 PM Bug #21773 (Pending Backport): restarting active ceph-mgr cause glitches in bps and iops metrics
- 06:11 AM Bug #20629 (Pending Backport): Spurious ceph-mgr failovers during mon elections
10/31/2017
- 05:25 AM Bug #21981: mgr: dashboard plugin OSD daemons' table the Usage column's value is always zero.
- https://github.com/ceph/ceph/pull/18637
- 05:22 AM Bug #21981 (Resolved): mgr: dashboard plugin OSD daemons' table the Usage column's value is alway...
- The dashboard plugin failed to get the following counter:
2017-10-31 13:05:04.598 7f1744f90700 4 mgr get_counter...
10/30/2017
- 07:25 PM Bug #21599 (Pending Backport): List of filesystems does not get refreshed after a filesystem dele...
- 06:34 AM Bug #21891: ceph mgr stopping result in segfault
- fixed by https://github.com/ceph/ceph/pull/18182
- 02:43 AM Bug #21891 (In Progress): ceph mgr stopping result in segfault
10/27/2017
- 02:56 PM Bug #21957: ceph osd status ignores --format option
- This is on v12.2.1
- 02:56 PM Bug #21957 (New): ceph osd status ignores --format option
- There is an expectation that specifying the --format option to the "ceph" command will change the format of the outpu...
- 12:44 PM Backport #21946 (Resolved): luminous: `fs status` always says 0 clients
- https://github.com/ceph/ceph/pull/18722
- 02:40 AM Bug #21927 (Pending Backport): `fs status` always says 0 clients
- 12:45 AM Bug #21941 (Rejected): mgr systemd startup issue
- Was a mgr/keyring/dir permission issue
10/26/2017
- 11:41 PM Bug #21941: mgr systemd startup issue
- Checking the logs, it looks like something that probably needs to be fixed in ceph.py; checking further on that.
2017-1...
- 11:20 PM Bug #21941 (Rejected): mgr systemd startup issue
- 1) I am running a test that uses systemd to start daemons instead of teuthology's old way of daemon-helper
cluster...
- 03:20 PM Bug #21707 (Fix Under Review): "osd status" command exception if OSD not in pgmap stats
cluster... - 03:20 PM Bug #21707 (Fix Under Review): "osd status" command exception if OSD not in pgmap stats
- 03:06 PM Bug #21890 (Duplicate): ceph manager SIGABRT
- 02:36 AM Bug #21687: mgr: mark_down of osd without metadata is broken
- https://github.com/ceph/ceph/pull/18484
10/25/2017
- 01:48 PM Bug #21927 (Fix Under Review): `fs status` always says 0 clients
- https://github.com/ceph/ceph/pull/18537
- 01:38 PM Bug #21927 (Resolved): `fs status` always says 0 clients
- We didn't set a priority on the counter for sessions, so it's not getting sent to the mgr any more.
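The mechanism behind this bug is that counters below a priority threshold are filtered out before being sent to the mgr, so a counter registered without a priority silently disappears. A generic illustration (the threshold name and counter layout are assumptions, not the actual Ceph perf-counter API):

```python
PRIO_USEFUL = 5  # assumed cutoff; counters below this are not forwarded

def counters_to_send(counters):
    # Only counters at or above the threshold reach the mgr; a counter
    # registered with the default priority of 0 is silently dropped,
    # which is why `fs status` saw 0 clients.
    return {name: c for name, c in counters.items()
            if c.get('prio', 0) >= PRIO_USEFUL}
```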
10/24/2017
- 05:27 PM Bug #21904 (Resolved): mgr[zabbix] float division by zero (osd['kb'] = 0)
- Issued ceph osd out 0 and zabbix monitoring stopped....
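The crash is a plain ZeroDivisionError: an OSD that has been marked out reports `kb = 0`, and the module divides by it when computing utilization. A guard along these lines avoids it (a hypothetical helper, not the zabbix module's actual code):

```python
def osd_percent_used(osd):
    # An OSD marked 'out' reports zero capacity; dividing by it raises
    # ZeroDivisionError, so report 0% utilization instead.
    kb = osd.get('kb', 0)
    if kb == 0:
        return 0.0
    return float(osd['kb_used']) / kb * 100.0
```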
10/23/2017
- 09:24 AM Bug #21687: mgr: mark_down of osd without metadata is broken
- ...
- 08:01 AM Feature #17460 (Fix Under Review): Enable python modules to advertise services
- Part of https://github.com/ceph/ceph/pull/16651
- 08:00 AM Bug #21593 (Fix Under Review): segv in PyList_New from PyFormatter
- Should be fixed by https://github.com/ceph/ceph/pull/16651/commits/892e34edd169cd937aac71b2fde7edf6c0888c3d
10/22/2017
- 08:08 PM Bug #21890: ceph manager SIGABRT
- I have found the reason -- I tried to fix #21886 and have changed...
- 07:30 PM Bug #21890: ceph manager SIGABRT
- For some reason, if I run exactly the same command line as systemd does, but with -f changed to -d, everything works...
- 07:14 PM Bug #21890: ceph manager SIGABRT
- ceph version 12.2.1
Debian GNU/Linux 9.2
- 07:12 PM Bug #21890 (Duplicate): ceph manager SIGABRT
- See log file.
- 08:04 PM Bug #21891 (New): ceph mgr stopping result in segfault
- ...
10/20/2017
- 09:30 AM Backport #21875 (Resolved): luminous: ceph-mgr spuriously reloading OSD metadata on map changes
- https://github.com/ceph/ceph/pull/18732
- 01:17 AM Backport #21659 (Fix Under Review): luminous: Crash in get_metadata_python during MDS restart
- 01:09 AM Backport #21656 (Fix Under Review): luminous: crash on DaemonPerfCounters::update
10/19/2017
- 09:43 PM Fix #21292 (Resolved): Quieten scary RuntimeError from restful module on startup
- 09:40 PM Backport #21320 (Resolved): luminous: Quieten scary RuntimeError from restful module on startup
- 09:03 PM Bug #21159 (Pending Backport): ceph-mgr spuriously reloading OSD metadata on map changes
- 01:30 PM Bug #20950 (Fix Under Review): key mismatch for mgr after upgrade from jewel to luminous(dev)
- 01:30 PM Bug #20950: key mismatch for mgr after upgrade from jewel to luminous(dev)
- https://github.com/ceph/ceph/pull/18399
10/17/2017
- 09:30 PM Backport #21699 (Resolved): luminous: mgr status module uses base 10 units
- 09:21 AM Bug #21773: restarting active ceph-mgr cause glitches in bps and iops metrics
- https://github.com/ceph/ceph/pull/18347
10/16/2017
- 04:14 PM Bug #21773 (Fix Under Review): restarting active ceph-mgr cause glitches in bps and iops metrics
- 02:54 PM Bug #21773: restarting active ceph-mgr cause glitches in bps and iops metrics
- https://github.com/ceph/ceph/pull/18330