Activity
From 10/15/2019 to 11/13/2019
11/13/2019
- 10:44 PM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- /a/trociny-2019-10-15_07:49:13-rbd-master-distro-basic-smithi/4414497/
- 10:32 PM Bug #42503 (Closed): There are a lot of OSD downturns on this node. After PG is redistributed, a ...
- Yes, sometimes CRUSH selection fails when you have a very small number of choices compared to the number of required ...
- 10:27 PM Bug #42529 (Closed): memory bloat + OSD process crash
- 10:26 PM Bug #42577 (Rejected): acting_recovery_backfill won't catch all up peers
- Xie, feel free to reopen it with more explanation, if you still think this is a problem.
- 10:07 PM Backport #42242 (Resolved): nautilus: Adding Placement Group id in Large omap log message
- 08:31 PM Backport #42547: nautilus: verify_upmaps can not cancel invalid upmap_items in some cases
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30899
merged
- 08:30 PM Backport #42126: nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30899
merged
- 08:25 PM Backport #41238: nautilus: Implement mon_memory_target
- Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/30419
merged
- 08:21 PM Backport #42326: nautilus: max_size from crushmap ignored when increasing size on pool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30941
merged
- 08:20 PM Feature #41359: Adding Placement Group id in Large omap log message
- https://github.com/ceph/ceph/pull/30923 merged
- 08:13 PM Backport #40840: nautilus: Explicitly requested repair of an inconsistent PG cannot be scheduled ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29748
merged
- 08:09 PM Backport #41917: nautilus: osd: failure result of do_osd_ops not logged in prepare_transaction fu...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30546
merged
- 08:09 PM Backport #39517: nautilus: Improvements to standalone tests.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30528
merged
- 08:09 PM Backport #42014: nautilus: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30528
merged
- 08:07 PM Backport #41921: nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30486
merged
- 08:07 PM Backport #41862: nautilus: Mimic MONs have slow/long running ops
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30480
merged
- 08:06 PM Backport #41712: nautilus: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30371
merged
- 08:06 PM Backport #41640: nautilus: FAILED ceph_assert(info.history.same_interval_since != 0) in PG::start...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30280
merged
- 07:14 PM Bug #42783: test failure: due to client closed connection
- Marking this high, since this is showing up a lot on nautilus.
- 03:32 AM Bug #42783: test failure: due to client closed connection
- The connections were closed by the client repeatedly. I wonder if it's expected: we have "msgr-failures/fastclose.yaml". an...
- 03:29 AM Bug #42783 (Resolved): test failure: due to client closed connection
- on client side:...
- 04:45 PM Backport #41350: nautilus: hidden corei7 requirement in binary packages
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29772
merged
- 01:46 PM Bug #42782: nautilus: rados/test_librados_build.sh build failure
- Note: when octopus is split off from master, it will start having this same problem.
- 01:46 PM Bug #42782: nautilus: rados/test_librados_build.sh build failure
- mimic and luminous already have this fix. It was backported from master before the nautilus release.
- 01:26 PM Bug #42782 (Fix Under Review): nautilus: rados/test_librados_build.sh build failure
- 12:41 PM Backport #42798 (In Progress): mimic: unnecessary error message "calc_pg_upmaps failed to build o...
- 12:25 PM Backport #42798 (Resolved): mimic: unnecessary error message "calc_pg_upmaps failed to build over...
- https://github.com/ceph/ceph/pull/31957
- 12:41 PM Backport #42797 (In Progress): nautilus: unnecessary error message "calc_pg_upmaps failed to buil...
- 12:25 PM Backport #42797 (Resolved): nautilus: unnecessary error message "calc_pg_upmaps failed to build o...
- https://github.com/ceph/ceph/pull/31956
- 12:40 PM Backport #42796 (In Progress): luminous: unnecessary error message "calc_pg_upmaps failed to buil...
- 12:24 PM Backport #42796 (Resolved): luminous: unnecessary error message "calc_pg_upmaps failed to build o...
- https://github.com/ceph/ceph/pull/31598
- 12:26 PM Bug #41680 (Resolved): Removed OSDs with outstanding peer failure reports crash the monitor
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:18 PM Backport #41695: nautilus: Network ping monitoring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30195
m...
- 03:33 AM Backport #41695 (Resolved): nautilus: Network ping monitoring
- 12:17 PM Backport #42152 (Resolved): nautilus: Removed OSDs with outstanding peer failure reports crash th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30904
m...
- 07:59 AM Bug #42387: ceph_test_admin_socket_output fails in rados qa suite
- It's only the bench command that causes the issue....
- 03:39 AM Feature #40640 (Resolved): Network ping monitoring
- 03:39 AM Backport #41697: luminous: Network ping monitoring
- Backporting this requires https://github.com/ceph/ceph/pull/31277
- 03:37 AM Backport #41696: mimic: Network ping monitoring
- Backporting this requires https://github.com/ceph/ceph/pull/31275 from https://tracker.ceph.com/issues/42570
11/12/2019
- 11:40 PM Backport #41695: nautilus: Network ping monitoring
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30195
merged
- 11:40 PM Backport #42152: nautilus: Removed OSDs with outstanding peer failure reports crash the monitor
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30904
merged
- 11:32 PM Bug #42782 (Resolved): nautilus: rados/test_librados_build.sh build failure
- ...
- 08:22 PM Bug #42780 (Resolved): recursive lock of OpTracker::lock (70)
- I was testing 2 cephfs clients vs. a vstart cluster and the osd crashed....
- 06:24 PM Bug #42756 (Pending Backport): unnecessary error message "calc_pg_upmaps failed to build overfull...
- 04:31 PM Bug #41362 (Resolved): Rados bench sequential and random read: not behaving as expected when op s...
- 07:06 AM Feature #41666 (Pending Backport): Issue a HEALTH_WARN when a Pool is configured with [min_]size ...
- PR #31416 - https://github.com/ceph/ceph/pull/31416 is now merged into master.
- 03:37 AM Bug #42387 (New): ceph_test_admin_socket_output fails in rados qa suite
- ...
11/11/2019
- 11:00 PM Feature #14865: Permit cache eviction of watched object
- See [1] for an abandoned PR
[1] http://tracker.ceph.com/issues/14865
- 10:08 PM Support #42584 (Closed): MGR error: auth: could not find secret_id=<number>
- I'm closing this as I think it got addressed on the mailing list?
- 02:39 PM Support #42584: MGR error: auth: could not find secret_id=<number>
- This error message is written *not* only in the active MGR log but also in specific OSD logs.
- 09:45 PM Bug #42756 (Fix Under Review): unnecessary error message "calc_pg_upmaps failed to build overfull...
- 08:25 PM Bug #42756 (Resolved): unnecessary error message "calc_pg_upmaps failed to build overfull/underfull"
- After enabling ceph-mgr module balancer in upmap mode, we can see in ceph-mgr logs messages like:
-1 calc_pg_upmaps ...
- 08:00 PM Bug #41191 (Resolved): osd: scrub error on big objects; make bluestore refuse to start on big obj...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:58 PM Bug #41936 (Resolved): scrub errors after quick split/merge cycle
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:48 PM Bug #42501 (Fix Under Review): format error: ceph osd stat --format=json
- 02:24 PM Bug #42742 (Resolved): "failing miserably..." in Infiniband.cc
- lockdep should be initialized before creating any mutex.
as RDMA is always enabled when building ceph. and global ...
- 12:52 PM Backport #41958 (Resolved): nautilus: scrub errors after quick split/merge cycle
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30643
m...
- 12:52 PM Backport #42095: nautilus: global osd crash in DynamicPerfStats::add_to_reports
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30648
m...
- 12:51 PM Backport #41920 (Resolved): nautilus: osd: scrub error on big objects; make bluestore refuse to s...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30783
m...
- 12:36 PM Backport #42739 (Resolved): nautilus: scrub object count mismatch on device_health_metrics pool
- https://github.com/ceph/ceph/pull/31735
11/09/2019
- 09:39 PM Bug #41383 (Pending Backport): scrub object count mismatch on device_health_metrics pool
- 01:22 AM Bug #42718 (Resolved): Improve OSDMap::calc_pg_upmaps() efficiency
We should eliminate the rules based pool sets being passed to calc_pg_upmaps()
Also, osdmaptool --upmap should b...
11/08/2019
- 09:50 PM Bug #42716 (Resolved): Pool creation error message is hidden on FileStore-backed pools
- When trying to create a pool with an incorrect PG number, the error message is hidden by a warning message.
os...
- 05:48 PM Bug #38345: mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
- ...
- 04:11 PM Bug #42175: _txc_add_transaction error (2) No such file or directory not handled on operation 15
Seen in luminous for final point release:
http://pulpito.ceph.com/yuriw-2019-11-08_02:53:57-rados-wip-yuri8-test...
- 02:54 PM Bug #42706 (Can't reproduce): LibRadosList.EnumerateObjectsSplit fails
- ...
- 01:16 PM Bug #41891 (Resolved): global osd crash in DynamicPerfStats::add_to_reports
- 01:16 PM Backport #42095 (Resolved): nautilus: global osd crash in DynamicPerfStats::add_to_reports
- 03:59 AM Bug #42689 (Duplicate): nautilus mon/mgr: ceph status:pool number display is not right
- When I create a pool and then remove it, the pool count shown in the ceph status dumpinfo is not right.
!pool_info...
11/07/2019
- 10:48 PM Bug #42511 (Resolved): ceph-daemon fails when selinux is enabled
- 10:35 PM Bug #41383 (Fix Under Review): scrub object count mismatch on device_health_metrics pool
- exercise an abundance of caution!
- 10:04 PM Backport #42095: nautilus: global osd crash in DynamicPerfStats::add_to_reports
- Mykola Golub wrote:
> https://github.com/ceph/ceph/pull/30648
merged
- 10:04 PM Backport #41920: nautilus: osd: scrub error on big objects; make bluestore refuse to start on big...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30783
merged
- 09:11 PM Bug #24531: Mimic MONs have slow/long running ops
- Seeing the same on 14.2.4
- 06:31 PM Bug #42668: ceph daemon osd.* fails in osd container but ceph daemon mds.* does not fail in mds c...
- Hey Ben, I'm wondering where those extra monitor address args are coming from? Is there a local ceph.conf in the cont...
- 05:25 PM Backport #42547 (In Progress): nautilus: verify_upmaps can not cancel invalid upmap_items in some...
- 11:48 AM Bug #40403 (Resolved): osd: rollforward may need to mark pglog dirty
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:46 AM Bug #41429 (Resolved): Incorrect logical operator in Monitor::handle_auth_request()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:45 AM Bug #42079 (Resolved): weird daemon key seen in health alert
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:37 AM Backport #42548 (Resolved): luminous: verify_upmaps can not cancel invalid upmap_items in some cases
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31234
m...
- 11:35 AM Backport #42200 (Resolved): nautilus: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31031
m...
- 11:35 AM Backport #40504 (Resolved): nautilus: osd: rollforward may need to mark pglog dirty
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31034
m...
- 11:34 AM Backport #41548 (Resolved): nautilus: monc: send_command to specific down mon breaks other mon msgs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31037
m...
- 11:34 AM Backport #41705 (Resolved): nautilus: Incorrect logical operator in Monitor::handle_auth_request()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31038
m...
- 11:34 AM Backport #42125 (Resolved): nautilus: weird daemon key seen in health alert
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31039
m...
- 11:33 AM Bug #42455 (Resolved): nautilus: "ceph smart" should not require write access
- 11:15 AM Bug #36515: config options: 'services' field is empty for many config options
- This went in for nautilus:...
- 10:48 AM Backport #42662 (Need More Info): nautilus:Issue a HEALTH_WARN when a Pool is configured with [mi...
- The master PR https://github.com/ceph/ceph/pull/31416 is still open. Please do not attempt a backport until the maste...
- 05:36 AM Bug #41313: PG distribution completely messed up since Nautilus
- There definitely is something wrong.
After every kind of rebalance, no matter if caused by OSD removal/adding, or no...
- 12:56 AM Feature #42659 (Fix Under Review): add a health_warn when mon_osd_report_timeout <= mon_osd_repor...
11/06/2019
- 11:36 PM Bug #42668 (Won't Fix): ceph daemon osd.* fails in osd container but ceph daemon mds.* does not f...
- with K8S (RHOCS 4.2) OSD pods, I get this error from Ceph daemon command:
[bengland@bene-laptop ocs-operator]$ oco...
- 09:08 PM Bug #42666: mgropen from mgr comes from unknown.$id instead of mgr.$id
- I suspect this is caused by the rados or libcephfs module reusing the rados instance? ...
- 09:07 PM Bug #42666 (Duplicate): mgropen from mgr comes from unknown.$id instead of mgr.$id
- This works fine when the mgr is first restarted...
- 08:20 PM Backport #42200: nautilus: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31031
merged
- 08:20 PM Backport #40504: nautilus: osd: rollforward may need to mark pglog dirty
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31034
merged
- 08:19 PM Backport #41548: nautilus: monc: send_command to specific down mon breaks other mon msgs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31037
merged
- 08:19 PM Backport #41705: nautilus: Incorrect logical operator in Monitor::handle_auth_request()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31038
merged
- 08:18 PM Backport #42125: nautilus: weird daemon key seen in health alert
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31039
merged
- 08:18 PM Bug #42455: nautilus: "ceph smart" should not require write access
- https://github.com/ceph/ceph/pull/31111 merged
- 08:04 PM Backport #41785 (In Progress): nautilus: Make dumping of reservation info congruent between scrub...
- 06:27 PM Bug #42570 (Resolved): mgr: qa: upgrade mimic-master "src/osd/osd_types.h: 2313: FAILED ceph_asse...
I put in the backports, but they were already completed without creating backport trackers.
Nautilus: Separate f...
- 11:51 AM Backport #42662 (Resolved): nautilus:Issue a HEALTH_WARN when a Pool is configured with [min_]siz...
- https://github.com/ceph/ceph/pull/31842
- 07:40 AM Bug #42477 (Fix Under Review): Rados should use the '-o outfile' convention
- 01:37 AM Feature #42659 (Duplicate): add a health_warn when mon_osd_report_timeout <= osd_beacon_report_interval
- when mon_osd_report_timeout <= osd_beacon_report_interval, the mon may not receive an osd's beacon before mon_osd_report_timeou...
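A minimal way to inspect the two values being compared (a sketch only, assuming a Nautilus-or-later cluster that uses the centralized config database; the option names are taken from the report above):
$ ceph config get mon mon_osd_report_timeout
$ ceph config get osd osd_beacon_report_interval
# The proposed warning would fire when the first value is <= the second, since the
# mon could then mark OSDs down before their next beacon is even due.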
11/05/2019
- 08:44 PM Backport #42548: luminous: verify_upmaps can not cancel invalid upmap_items in some cases
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31234
merged
- 08:39 PM Bug #20952: Glitchy monitor quorum causes spurious test failure
- Seen in final point release for Luminous:
http://pulpito.ceph.com/yuriw-2019-11-05_00:10:49-rados-wip-yuri5-testin...
- 03:05 PM Feature #41666 (Fix Under Review): Issue a HEALTH_WARN when a Pool is configured with [min_]size ...
- PR https://github.com/ceph/ceph/pull/31416 addresses this issue.
- 01:43 PM Bug #42577: acting_recovery_backfill won't catch all up peers
- #35924 got backported to luminous, so luminous backport seems OK
- 01:18 PM Bug #39546 (Resolved): Warning about past_interval bounds on deleting pg
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:14 PM Bug #41721 (Resolved): TestClsRbd.sparsify fails when using filestore
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:13 PM Bug #42579 (Resolved): luminous p2p tests fail due to missing python3-cephfs package
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:07 PM Backport #42580 (Resolved): luminous: p2p tests fail due to missing python3-cephfs package
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31278
m...
- 01:06 PM Backport #41864 (Resolved): luminous: Mimic MONs have slow/long running ops
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30519
m...
- 01:06 PM Backport #42037 (Resolved): luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30896
m...
- 01:06 PM Backport #42199 (Resolved): luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31030
m...
- 01:01 PM Backport #42198 (Resolved): mimic: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31029
m...
- 01:01 PM Backport #40503 (Resolved): mimic: osd: rollforward may need to mark pglog dirty
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31035
m...
- 01:01 PM Backport #42394 (Resolved): mimic: CephContext::CephContextServiceThread might pause for 5 second...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31096
m...
- 01:00 PM Backport #42582 (Resolved): mimic: luminous p2p tests fail due to missing python3-cephfs package
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31285
m...
- 12:52 PM Backport #41764 (Resolved): nautilus: TestClsRbd.sparsify fails when using filestore
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30354
m...
- 12:50 PM Backport #41503 (Resolved): nautilus: Warning about past_interval bounds on deleting pg
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30000
m...
- 12:50 PM Backport #41583 (Resolved): nautilus: backfill_toofull seen on cluster where the most full OSD is...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29999
m...
- 12:50 PM Backport #41501 (Resolved): nautilus: backfill_toofull while OSDs are not full (Unneccessary HEAL...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29999
m...
- 12:49 PM Backport #41596 (Resolved): nautilus: ceph-objectstore-tool can't remove head with bad snapset
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30080
m...
- 12:25 PM Bug #24531: Mimic MONs have slow/long running ops
- The same on Nautilus 14.2.3
- 04:47 AM Feature #42638 (Resolved): Allow specifying pg_autoscale_mode when creating a new pool
- pg_autoscaler is enabled by default in Octopus, but we can't disable the feature when creating a new pool.
This migh...
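Until such an option exists at pool-creation time, a possible workaround is to turn the autoscaler off right after creating the pool, or to change the default mode; a sketch only, assuming a Nautilus/Octopus CLI and a hypothetical pool name "testpool":
$ ceph osd pool create testpool 32
$ ceph osd pool set testpool pg_autoscale_mode off
# or change the default mode applied to newly created pools:
$ ceph config set global osd_pool_default_pg_autoscale_mode off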
11/04/2019
- 09:53 PM Feature #38029 (Resolved): [RFE] If the nodeep-scrub/noscrub flags are set in pools instead of gl...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:52 PM Bug #38282 (Resolved): cephtool/test.sh failure in test_mon_osd_pool_set
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:52 PM Feature #38617 (Resolved): osd: Better error message when OSD count is less than osd_pool_default...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:49 PM Bug #40835 (Resolved): OSDCap.PoolClassRNS test aborts
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #41217 (Resolved): mon: C_AckMarkedDown has not handled the Callback Arguments
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #41250 (Resolved): osd/PrimaryLogPG: Access destroyed references in finish_degraded_object
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 PM Bug #41253 (Resolved): "CMake Error" in test_envlibrados_for_rocksdb.sh
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:25 PM Bug #42570: mgr: qa: upgrade mimic-master "src/osd/osd_types.h: 2313: FAILED ceph_assert(pos <= e...
- https://github.com/ceph/ceph/pull/31275 merged
- 09:01 PM Backport #41764: nautilus: TestClsRbd.sparsify fails when using filestore
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30354
merged
- 08:51 PM Backport #41503: nautilus: Warning about past_interval bounds on deleting pg
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30000
merged
- 08:50 PM Backport #41583: nautilus: backfill_toofull seen on cluster where the most full OSD is at 1%
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29999
merged
- 08:50 PM Backport #41501: nautilus: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29999
merged
- 08:48 PM Backport #41596: nautilus: ceph-objectstore-tool can't remove head with bad snapset
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30080
merged
- 08:43 PM Backport #42580: luminous: p2p tests fail due to missing python3-cephfs package
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/31278
merged
- 08:40 PM Backport #42580: luminous: p2p tests fail due to missing python3-cephfs package
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/31278
merged
- 08:35 PM Backport #42198: mimic: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31029
merged
- 08:34 PM Backport #40503: mimic: osd: rollforward may need to mark pglog dirty
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31035
merged
- 08:34 PM Backport #42394: mimic: CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31096
merged
- 08:31 PM Backport #42582: mimic: luminous p2p tests fail due to missing python3-cephfs package
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31285
merged
- 08:29 PM Backport #41864: luminous: Mimic MONs have slow/long running ops
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30519
merged
- 08:28 PM Backport #42037: luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30896
merged
- 08:27 PM Backport #42199: luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31030
merged
- 05:59 PM Backport #41448 (Resolved): nautilus: osd/PrimaryLogPG: Access destroyed references in finish_deg...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29994
m...
- 05:59 PM Backport #40084 (Resolved): nautilus: osd: Better error message when OSD count is less than osd_p...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29992
m...
- 05:59 PM Backport #39700 (Resolved): nautilus: [RFE] If the nodeep-scrub/noscrub flags are set in pools in...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29991
m...
- 05:58 PM Backport #39682 (Resolved): nautilus: filestore pre-split may not split enough directories
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29988
m...
- 05:58 PM Backport #41341 (Resolved): nautilus: "CMake Error" in test_envlibrados_for_rocksdb.sh
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29979
m...
- 05:58 PM Backport #41453 (Resolved): nautilus: mon: C_AckMarkedDown has not handled the Callback Arguments
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29997
m...
- 05:58 PM Backport #41491 (Resolved): nautilus: OSDCap.PoolClassRNS test aborts
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29998
m...
- 05:19 PM Bug #20188: filestore: os/filestore/FileStore.h: 357: FAILED assert(q.empty()) from ceph_test_obj...
- /a/yuriw-2019-11-02_14:48:27-rados-wip-yuri-testing-2019-11-01-1917-luminous-distro-basic-smithi/4467375
11/02/2019
- 03:25 PM Backport #42558 (Resolved): mimic: cephtool/test.sh failure in test_mon_osd_pool_set
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31236
m...
11/01/2019
- 10:35 PM Bug #41834 (Resolved): qa: EC Pool configuration and slow op warnings for OSDs caused by recent m...
- 08:28 PM Bug #42597 (New): mon and mds ok-to-stop commands should validate input names exist to prevent mi...
- "ceph osd ok-to-stop" accepts only integers, "any", and "all". However, the "mon" and "mds" versions accept any strin...
- 07:12 PM Backport #41448: nautilus: osd/PrimaryLogPG: Access destroyed references in finish_degraded_object
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29994
merged
- 07:11 PM Backport #40084: nautilus: osd: Better error message when OSD count is less than osd_pool_default...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29992
merged
- 07:09 PM Backport #39700: nautilus: [RFE] If the nodeep-scrub/noscrub flags are set in pools instead of gl...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29991
merged
- 07:05 PM Backport #39682: nautilus: filestore pre-split may not split enough directories
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29988
merged
- 07:04 PM Backport #41341: nautilus: "CMake Error" in test_envlibrados_for_rocksdb.sh
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29979
merged
- 07:03 PM Backport #41453: nautilus: mon: C_AckMarkedDown has not handled the Callback Arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29997
merged
- 07:03 PM Backport #41491: nautilus: OSDCap.PoolClassRNS test aborts
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29998
merged
- 04:57 PM Bug #42566 (Resolved): mgr commands fail when using non-client auth
- i don't think this needs to be backported.
- 04:08 PM Bug #42511 (Fix Under Review): ceph-daemon fails when selinux is enabled
- 12:16 PM Bug #42511: ceph-daemon fails when selinux is enabled
- Boris Ranto wrote:
> What is this used/needed for? Having :z for /dev is not a great idea. Relabelling devices for u...
- 09:31 AM Bug #42499 (Resolved): test_ceph_daemon.sh fails
- 05:59 AM Feature #42593 (New): set ms_bind_before_connect option true by default
- The option ms_bind_before_connect has existed since before Nautilus. It is used to bind the messenger's source IP before connecting to a target OSD.
It's useful for d...
- 03:03 AM Bug #42592 (Duplicate): ceph-mon/mgr PGstat Segmentation Fault
- Ceph version: nautilus 14.2.4
A 3-node cluster is used for CephFS file system storage.
when I run the scripts file a...
10/31/2019
- 10:51 PM Bug #42590 (New): Thrasher can set full ratio but no yaml whitelists for (OSD_OUT_OF_ORDER_FULL)
We have 2 choices:
Add (OSD_OUT_OF_ORDER_FULL) to the appropriate yaml files.
Or simpler yet set all values b...
- 06:10 PM Backport #42259 (In Progress): nautilus: document new option mon_max_pg_per_osd
- ceph-backport.sh version 15.0.0.6612: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 04:40 PM Backport #42558: mimic: cephtool/test.sh failure in test_mon_osd_pool_set
- David Zafman wrote:
> https://github.com/ceph/ceph/pull/31236
merged
- 02:30 PM Backport #42558 (In Progress): mimic: cephtool/test.sh failure in test_mon_osd_pool_set
- 02:31 PM Bug #38282 (Pending Backport): cephtool/test.sh failure in test_mon_osd_pool_set
- 02:30 PM Bug #38282 (Resolved): cephtool/test.sh failure in test_mon_osd_pool_set
- mimic backport: https://github.com/ceph/ceph/pull/31236
- 01:14 PM Support #42584 (Closed): MGR error: auth: could not find secret_id=<number>
- Hi,
I have noticed multiple errors in MGR log:
2019-10-31 14:06:31.623 7ff9ecd62700 0 auth: could not find secret_...
- 11:14 AM Backport #42582 (In Progress): mimic: luminous p2p tests fail due to missing python3-cephfs package
- ceph-backport.sh version 15.0.0.6270: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 11:10 AM Backport #42582 (Resolved): mimic: luminous p2p tests fail due to missing python3-cephfs package
- https://github.com/ceph/ceph/pull/31285
- 08:54 AM Bug #42511: ceph-daemon fails when selinux is enabled
- What is this used/needed for? Having :z for /dev is not a great idea. Relabelling devices for use in containers in th...
- 05:11 AM Bug #42411 (Fix Under Review): nautilus:osd: network numa affinity not supporting subnet port
- 03:36 AM Backport #42580 (In Progress): luminous: p2p tests fail due to missing python3-cephfs package
- 03:33 AM Backport #42580 (Resolved): luminous: p2p tests fail due to missing python3-cephfs package
- https://github.com/ceph/ceph/pull/31278
- 03:32 AM Bug #42579 (Resolved): luminous p2p tests fail due to missing python3-cephfs package
- ...
- 12:53 AM Bug #42577 (Rejected): acting_recovery_backfill won't catch all up peers
- see https://github.com/ceph/ceph/pull/24035, due to the "want size <= pool size" constraint,
we'll now start to exc...
10/30/2019
- 10:41 PM Bug #42518 (Duplicate): rados man page is badly out of sync with actual usage
- 10:41 PM Bug #42012: mon osd_snap keys grow unbounded
- Sage says he wants to confirm we're not leaving behind more keys than we need in master
- 10:25 PM Bug #42511: ceph-daemon fails when selinux is enabled
- *** revert https://github.com/ceph/ceph/pull/31269 when this is fixed ***
- 02:52 PM Bug #42511: ceph-daemon fails when selinux is enabled
- this also fails,...
- 09:46 PM Bug #42570 (Fix Under Review): mgr: qa: upgrade mimic-master "src/osd/osd_types.h: 2313: FAILED c...
- reverting in mimic for now. the nautilus PR https://github.com/ceph/ceph/pull/30195 is similarly broken but hasn't m...
- 09:38 PM Bug #42570: mgr: qa: upgrade mimic-master "src/osd/osd_types.h: 2313: FAILED ceph_assert(pos <= e...
- db84d9ea8f3d1d46ba4cc3116aea052e8554261d from PR 30951 [1] is the problem. It adds ping times, which are at osd_st...
- 07:48 PM Bug #42570 (Resolved): mgr: qa: upgrade mimic-master "src/osd/osd_types.h: 2313: FAILED ceph_asse...
- ...
- 03:24 PM Bug #42566 (Fix Under Review): mgr commands fail when using non-client auth
- 03:22 PM Bug #42566 (Resolved): mgr commands fail when using non-client auth
- e.g., 'ceph -n mon. -k /var/lib/ceph/mon/ceph-a/keyring pg ls' will fail.
root cause is the DaemonServer condition...
- 07:03 AM Bug #42529: memory bloat + OSD process crash
- Close.
Cause: wrong memory target setting
- 01:34 AM Backport #42558 (Resolved): mimic: cephtool/test.sh failure in test_mon_osd_pool_set
- https://github.com/ceph/ceph/pull/31236
- 01:32 AM Bug #38282 (Pending Backport): cephtool/test.sh failure in test_mon_osd_pool_set
10/29/2019
- 10:24 PM Backport #42548 (In Progress): luminous: verify_upmaps can not cancel invalid upmap_items in some...
- ceph-backport.sh version 15.0.0.6270: attempting to link this Backport tracker issue with GitHub PR https://github.co...
- 09:43 PM Backport #42548 (Resolved): luminous: verify_upmaps can not cancel invalid upmap_items in some cases
- https://github.com/ceph/ceph/pull/31234
NOTE: reverted by https://github.com/ceph/ceph/pull/32019
- 09:42 PM Backport #42547 (Resolved): nautilus: verify_upmaps can not cancel invalid upmap_items in some cases
- https://github.com/ceph/ceph/pull/30899
NOTE: reverted by https://github.com/ceph/ceph/pull/32018
- 09:42 PM Backport #42546 (Rejected): mimic: verify_upmaps can not cancel invalid upmap_items in some cases
- 09:38 PM Backport #41696 (Resolved): mimic: Network ping monitoring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30225
m...
- 04:35 PM Backport #41696: mimic: Network ping monitoring
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30225
merged
- 09:33 PM Backport #41697 (Resolved): luminous: Network ping monitoring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30230
m...
- 09:30 PM Backport #41919 (Resolved): luminous: osd: scrub error on big objects; make bluestore refuse to s...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30785
m...
- 09:29 PM Backport #42138 (Resolved): luminous: Remove unused full and nearful output from OSDMap summary
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30902
m...
- 01:14 PM Bug #42346: Nearfull warnings are incorrect
- I don't expect to see nearfull warnings if they are not over the threshold.
Commands run: I see this every time there is ...
- 01:10 PM Bug #42529: memory bloat + OSD process crash
- some mempool info of an affected OSD;...
- 01:05 PM Bug #42529 (Closed): memory bloat + OSD process crash
- Seeing OSD processes using up to 30G Ram. 7.2k 10TB HDDs. Affects multiple OSDs on multiple hosts. (related http://li...
- 10:52 AM Bug #42486 (Resolved): Missing 'type' file and unable to infer osd type error
- Issue is fixed on rebase.
- 07:44 AM Bug #40583: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold
- I am taking the liberty to add a couple of recent mailing list threads here that highlight a potentially unintended c...
- 02:16 AM Bug #42519 (New): During deployment of the ceph,when the main node starts slower than the other n...
- During deployment of Ceph, the main MON node starts slowly, and the other two nodes start first and complete the ...
10/28/2019
- 11:11 PM Bug #42477: Rados should use the '-o outfile' convention
- Christian Huebner wrote:
> According to the manpage for the rados tool, -o denotes the output file name:
Oops, th...
- 04:40 PM Bug #42477: Rados should use the '-o outfile' convention
- According to the manpage for the rados tool, -o denotes the output file name:
NAME
rados - rados object s...
- 11:05 PM Bug #42518 (Duplicate): rados man page is badly out of sync with actual usage
- 09:35 PM Support #42449: Flushing cache pool will take months
- Thanks for the info about target dirty - I will take a look.
I am following the official documentation - which does ...
- 09:06 PM Support #42449: Flushing cache pool will take months
- You'll get better help with cache tiers on the mailing list or irc than here in the tracker. :)
I think you want t...
- 02:31 PM Bug #42511 (Resolved): ceph-daemon fails when selinux is enabled
- if you setenforce 0, everything is great. otherwise, however, you get an error like...
- 01:35 PM Bug #39039: mon connection reset, command not resent
- ...
- 01:27 PM Bug #42347: nautilus assert during osd shutdown: FAILED ceph_assert((sharded_in_flight_list.back(...
- Seeing 3 clusters hitting this on 14.2.2 via telemetry.
- 08:25 AM Bug #41834 (Fix Under Review): qa: EC Pool configuration and slow op warnings for OSDs caused by ...
- 04:29 AM Bug #42499 (Fix Under Review): test_ceph_daemon.sh fails
- 02:17 AM Bug #42503 (Closed): There are a lot of OSD downturns on this node. After PG is redistributed, a ...
- The environment is a three-node Ceph cluster with a 2+1 redundancy configuration. Because of our own OSD cache fun...
10/27/2019
- 03:57 PM Bug #41946 (Duplicate): cbt perf test fails due to leftover in /home/ubuntu/cephtest
- 02:20 PM Bug #42485 (Pending Backport): verify_upmaps can not cancel invalid upmap_items in some cases
- 08:08 AM Bug #42501: format error: ceph osd stat --format=json
- master branch commit head: bf09a04d2275de
"ceph osd stat" output right data:
$ ceph osd stat
3 osds: 3 up (since...
- 08:05 AM Bug #42501 (Resolved): format error: ceph osd stat --format=json
- -bash-4.2$ ceph osd stat
3 osds: 3 up (since 3d), 3 in (since 3d); epoch: e38
-bash-4.2$
-bash-4.2$ ceph osd stat...
- 01:38 AM Bug #42225 (Resolved): target_max_bytes and target_max_objects should accept values in [M,G,T]iB ...
- 12:57 AM Bug #42500 (New): test_fuse.sh fails
- ...
- 12:48 AM Bug #42499 (Resolved): test_ceph_daemon.sh fails
- ...
- 12:34 AM Bug #42387 (Resolved): ceph_test_admin_socket_output fails in rados qa suite
10/26/2019
- 04:40 AM Bug #42477: Rados should use the '-o outfile' convention
- '-o' does not do what you inferred....
10/25/2019
- 07:05 PM Backport #41697: luminous: Network ping monitoring
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30230
merged
- 07:04 PM Backport #41919: luminous: osd: scrub error on big objects; make bluestore refuse to start on big...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30785
merged
- 07:03 PM Backport #42138: luminous: Remove unused full and nearful output from OSDMap summary
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30902
merged
- 11:53 AM Bug #42486 (Resolved): Missing 'type' file and unable to infer osd type error
- While deploying a ceph cluster using rook (master branch, commit 30d019acf025f) with ceph-test.yaml, I am getting the followi...
- 11:49 AM Bug #42485 (Resolved): verify_upmaps can not cancel invalid upmap_items in some cases
- We can not cancel in verify_upmap if we remap an osd to a different root bucket,
cluster topology:
osd.0 ~ osd.29 belo...
- 09:54 AM Bug #42346: Nearfull warnings are incorrect
- Could you add some more explanation to this issue?
* Which commands did you run?
* What was wrong?
* What did yo...
- 07:37 AM Backport #42393 (Resolved): luminous: CephContext::CephContextServiceThread might pause for 5 sec...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31020
m...
- 01:36 AM Backport #42393: luminous: CephContext::CephContextServiceThread might pause for 5 seconds at shu...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31020
merged
- 07:36 AM Backport #40502 (Resolved): luminous: osd: rollforward may need to mark pglog dirty
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31036
m...
- 01:35 AM Backport #40502: luminous: osd: rollforward may need to mark pglog dirty
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31036
merged
- 04:31 AM Backport #41583: nautilus: backfill_toofull seen on cluster where the most full OSD is at 1%
- @Bryan I don't see any reason why not, but *at this moment* we are focusing on the next mimic release.
- 02:19 AM Bug #42476 (Resolved): ceph-objectstore-tool crashes trying to access meta objects
10/24/2019
- 10:05 PM Bug #42477 (Resolved): Rados should use the '-o outfile' convention
- I have a healthy Ceph cluster with Nautilus 14.2.4. When I issue 'rados df' on it, I get the correct result, both wit...
- 06:14 PM Bug #42476 (Resolved): ceph-objectstore-tool crashes trying to access meta objects
Caused by: https://tracker.ceph.com/issues/41913...
- 05:47 PM Backport #41583: nautilus: backfill_toofull seen on cluster where the most full OSD is at 1%
- Is there any chance this could be merged in before the 14.2.5 release?
- 07:43 AM Bug #42455 (Fix Under Review): nautilus: "ceph smart" should not require write access
- 07:37 AM Bug #42455 (Resolved): nautilus: "ceph smart" should not require write access
- ...
- 06:47 AM Bug #42452 (Fix Under Review): msg/async: the event center is blocked by rdma construct conection...
- 04:14 AM Bug #42452: msg/async: the event center is blocked by rdma construct conection for transport ib s...
- How to trigger this Bug:
1. use async+rdma;
2. reboot a server;
3. observe cluster recovery time;
4. observe wh...
- 03:05 AM Bug #42452 (Resolved): msg/async: the event center is blocked by rdma construct conection for tra...
- In msg/async/rdma, we construct a TCP connection to transport the IB sync msg; if the
remote node is shutdown (shutdown ...
- 01:35 AM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
- throttle_stamp is indeed zero....
10/23/2019
- 09:31 PM Support #42449 (New): Flushing cache pool will take months
- We have an EC pool for our main radosgw data. In front of this pool is/was a 3x replicated cache tier pool with "writ...
- 07:54 PM Backport #42395 (In Progress): nautilus: CephContext::CephContextServiceThread might pause for 5 ...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 07:52 PM Backport #42394 (In Progress): mimic: CephContext::CephContextServiceThread might pause for 5 sec...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:25 PM Backport #42141 (In Progress): nautilus: asynchronous recovery can not function under certain cir...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 12:24 PM Backport #41785 (Need More Info): nautilus: Make dumping of reservation info congruent between sc...
- this one is very difficult - one reason for the high difficulty level is that 35d0ce394f746158f2695efb4c09511eff82bd9...
- 08:07 AM Bug #41191: osd: scrub error on big objects; make bluestore refuse to start on big objects
- @Florian - the explanation is simple: we're working on mimic now. We currently don't have any policy saying "fix X mu...
- 07:41 AM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
- Current candidate is this stack. I'll continue with this in the morning as I need to set up an environment to analyse...
- 06:54 AM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
- Just an update.
I'm trying to zero in on this by adding an assert that triggers when a TrackedOp with initiated.is...
- 02:19 AM Bug #42113: ceph -h usage should indicate CephChoices --name= is sometime required
- Matthew Oliver wrote:
> I'll try and address both in the same patch (or is that too much scope creep?)
Seems the...
10/22/2019
- 10:13 PM Bug #42113: ceph -h usage should indicate CephChoices --name= is sometime required
- Sebastian Wagner wrote:
> Marked #40801 as duplicate. Actually #40801 was first, but got little attention.
From w...
- 10:38 AM Bug #42113: ceph -h usage should indicate CephChoices --name= is sometime required
- Marked #40801 as duplicate. Actually #40801 was first, but got little attention.
- 03:01 AM Bug #42113 (Fix Under Review): ceph -h usage should indicate CephChoices --name= is sometime requ...
- @Matthew thanks for your contribution! the reason why you could not assign this ticket is that you are not listed in ...
- 02:19 AM Bug #42113: ceph -h usage should indicate CephChoices --name= is sometime required
- Can't seem to assign this to myself. But I have the first version of the patch coming.
- 02:32 PM Bug #41191: osd: scrub error on big objects; make bluestore refuse to start on big objects
- Thanks! It looks like the backport for Mimic has already landed while the one for Nautilus is still pending. That str...
- 10:30 AM Feature #41831 (Resolved): tools/rados: allow list objects in a specific pg in a pool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:30 AM Bug #41875 (Resolved): Segmentation fault in rados ls when using --pgid and --pool/-p together as...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:30 AM Cleanup #41876 (Resolved): tools/rados: add --pgid in help
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:26 AM Backport #41964 (Resolved): mimic: Segmentation fault in rados ls when using --pgid and --pool/-p...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30893
m...
- 07:26 AM Backport #41961 (Resolved): mimic: tools/rados: add --pgid in help
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30893
m...
- 07:26 AM Backport #41844 (Resolved): mimic: tools/rados: allow list objects in a specific pg in a pool
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30893
m...
- 07:26 AM Backport #42128 (Resolved): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30898
m...
- 07:25 AM Backport #42154 (Resolved): mimic: Removed OSDs with outstanding peer failure reports crash the m...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30903
m...
- 07:25 AM Backport #42240: mimic: Adding Placement Group id in Large omap log message
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30924
m...
- 01:01 AM Backport #42240 (Resolved): mimic: Adding Placement Group id in Large omap log message
- 07:23 AM Backport #42036 (Resolved): mimic: Enable auto-scaler and get src/osd/PeeringState.cc:3671: faile...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30895
m...
- 07:23 AM Backport #42137 (Resolved): mimic: Remove unused full and nearful output from OSDMap summary
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30901
m...
- 07:22 AM Backport #42362 (Resolved): mimic: python3-cephfs should provide python36-cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30982
m...
- 07:13 AM Backport #42361 (Resolved): luminous: python3-cephfs should provide python36-cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30981
m...
- 07:00 AM Backport #40082 (Resolved): luminous: osd: Better error message when OSD count is less than osd_p...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30298
m...
- 06:43 AM Bug #42411 (Resolved): nautilus:osd: network numa affinity not supporting subnet port
- !subnet_error.png!
- 02:13 AM Bug #42387 (Fix Under Review): ceph_test_admin_socket_output fails in rados qa suite
10/21/2019
- 11:53 PM Backport #41964: mimic: Segmentation fault in rados ls when using --pgid and --pool/-p together a...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30893
merged
- 11:52 PM Backport #41961: mimic: tools/rados: add --pgid in help
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30893
merged
- 11:52 PM Backport #41844: mimic: tools/rados: allow list objects in a specific pg in a pool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30893
merged
- 11:52 PM Backport #42128: mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30898
merged
- 11:51 PM Backport #42154: mimic: Removed OSDs with outstanding peer failure reports crash the monitor
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30903
merged
- 11:51 PM Backport #42240: mimic: Adding Placement Group id in Large omap log message
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30924
merged
- 11:47 PM Backport #42036: mimic: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert in...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30895
merged
- 11:46 PM Backport #42137: mimic: Remove unused full and nearful output from OSDMap summary
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30901
merged
- 11:45 PM Backport #42362: mimic: python3-cephfs should provide python36-cephfs
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/30982
merged
- 10:02 PM Backport #42125 (In Progress): nautilus: weird daemon key seen in health alert
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:50 PM Backport #41705 (In Progress): nautilus: Incorrect logical operator in Monitor::handle_auth_reque...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:48 PM Backport #41548 (In Progress): nautilus: monc: send_command to specific down mon breaks other mon...
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:35 PM Backport #40502 (In Progress): luminous: osd: rollforward may need to mark pglog dirty
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:34 PM Backport #40503 (In Progress): mimic: osd: rollforward may need to mark pglog dirty
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:30 PM Backport #40504 (In Progress): nautilus: osd: rollforward may need to mark pglog dirty
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:14 PM Backport #42202 (In Progress): luminous: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:11 PM Backport #42201 (In Progress): mimic: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:09 PM Backport #42200 (In Progress): nautilus: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:06 PM Backport #42199 (In Progress): luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:05 PM Backport #42198 (In Progress): mimic: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:04 PM Backport #42197 (In Progress): nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Updated automatically by ceph-backport.sh version 15.0.0.6270
- 09:04 PM Feature #42321 (Fix Under Review): Add a new mode to balance pg layout by primary osds
- 06:18 PM Backport #42361: luminous: python3-cephfs should provide python36-cephfs
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/30981
merged
- 01:00 PM Backport #42393 (In Progress): luminous: CephContext::CephContextServiceThread might pause for 5 ...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 11:52 AM Backport #42393 (Need More Info): luminous: CephContext::CephContextServiceThread might pause for...
- luminous is close to EOL
- 08:19 AM Backport #42393 (Resolved): luminous: CephContext::CephContextServiceThread might pause for 5 sec...
- https://github.com/ceph/ceph/pull/31020
- 12:29 PM Bug #42332: CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
- Nathan Cutler wrote:
> @Jason - is this issue serious enough to warrant a luminous backport at this stage?
It's a...
- 11:52 AM Bug #42332: CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
- @Jason - is this issue serious enough to warrant a luminous backport at this stage?
- 04:15 AM Bug #42332 (Pending Backport): CephContext::CephContextServiceThread might pause for 5 seconds at...
- 11:30 AM Bug #42387: ceph_test_admin_socket_output fails in rados qa suite
- being tested at http://pulpito.ceph.com/kchai-2019-10-21_11:21:40-rados-wip-before-asok-changes-distro-basic-mira/.
...
- 04:44 AM Bug #42387: ceph_test_admin_socket_output fails in rados qa suite
- pushed ceph_test_admin_socket_output to ceph-ci, so i can rerun this test without Sage's asock changes.
- 03:30 AM Bug #42387 (Resolved): ceph_test_admin_socket_output fails in rados qa suite
- osd.0...
- 09:21 AM Bug #42225 (Fix Under Review): target_max_bytes and target_max_objects should accept values in [M...
- https://github.com/ceph/ceph/pull/31010
- 08:19 AM Backport #42395 (Resolved): nautilus: CephContext::CephContextServiceThread might pause for 5 sec...
- https://github.com/ceph/ceph/pull/31097
- 08:19 AM Backport #42394 (Resolved): mimic: CephContext::CephContextServiceThread might pause for 5 second...
- https://github.com/ceph/ceph/pull/31096
10/18/2019
- 02:47 PM Backport #40082: luminous: osd: Better error message when OSD count is less than osd_pool_default...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30298
merged
- 11:35 AM Bug #42360: python3-cephfs should provide python36-cephfs
- Output from backport-create-issue:...
- 03:56 AM Bug #42360 (Pending Backport): python3-cephfs should provide python36-cephfs
- 03:55 AM Bug #42360 (In Progress): python3-cephfs should provide python36-cephfs
- 03:54 AM Bug #42360 (Resolved): python3-cephfs should provide python36-cephfs
- when upgrading from v12 to v13:...
- 08:02 AM Bug #42115 (Resolved): Turn off repair pg state when leaving recovery
- 06:49 AM Backport #41959 (Resolved): luminous: tools/rados: add --pgid in help
- backport PR https://github.com/ceph/ceph/pull/30608
merge commit 3f135f58f62212d540e5ba73b923773a9fa199c7 (v12.2.12-...
- 06:49 AM Backport #41962 (Resolved): luminous: Segmentation fault in rados ls when using --pgid and --pool...
- backport PR https://github.com/ceph/ceph/pull/30608
merge commit 3f135f58f62212d540e5ba73b923773a9fa199c7 (v12.2.12-...
- 06:48 AM Backport #41845 (Resolved): luminous: tools/rados: allow list objects in a specific pg in a pool
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30608
m...
- 06:40 AM Backport #42153 (Resolved): luminous: Removed OSDs with outstanding peer failure reports crash th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30905
m...
- 06:38 AM Backport #42241: luminous: Adding Placement Group id in Large omap log message
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30922
m...
- 06:37 AM Backport #42127 (Resolved): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30926
m...
- 06:28 AM Bug #42366 (New): reweight-subtree will not update weight for shadowed node
- reweight-subtree only updates OSD weights in the main CRUSH tree; the weights under a shadow node (like myrack~hdd) will not...
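A rough way to observe the reported behaviour, sketched in Python around the standard ceph CLI; the bucket name myrack is taken from the report and is illustrative, and the weight value is arbitrary:

    # Sketch: reweight a subtree, then compare the main CRUSH tree with the
    # shadow (per-device-class) tree. Bucket name and weight are illustrative.
    import subprocess

    def crush_tree(show_shadow: bool = False) -> str:
        cmd = ["ceph", "osd", "crush", "tree"]
        if show_shadow:
            cmd.append("--show-shadow")
        return subprocess.check_output(cmd, text=True)

    # Reweight every OSD under the bucket in the main tree ...
    subprocess.check_call(["ceph", "osd", "crush", "reweight-subtree", "myrack", "1.0"])

    # ... then inspect both views; per this report, the shadow node (myrack~hdd)
    # may still show the old weights.
    print(crush_tree())
    print(crush_tree(show_shadow=True))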
- 06:04 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
- https://tracker.ceph.com/issues/22570
@Greg, my problem is related to this tracker.
Problem can be resolved and...
- 04:20 AM Backport #42363 (In Progress): nautilus: python3-cephfs should provide python36-cephfs
- 03:59 AM Backport #42363 (Resolved): nautilus: python3-cephfs should provide python36-cephfs
- https://github.com/ceph/ceph/pull/30983
- 04:16 AM Backport #42362 (In Progress): mimic: python3-cephfs should provide python36-cephfs
- 03:59 AM Backport #42362 (Resolved): mimic: python3-cephfs should provide python36-cephfs
- https://github.com/ceph/ceph/pull/30982
- 04:12 AM Backport #42361 (In Progress): luminous: python3-cephfs should provide python36-cephfs
- 03:58 AM Backport #42361 (Resolved): luminous: python3-cephfs should provide python36-cephfs
- https://github.com/ceph/ceph/pull/30981
10/17/2019
- 11:08 PM Backport #42241 (Resolved): luminous: Adding Placement Group id in Large omap log message
- 11:00 PM Backport #42241: luminous: Adding Placement Group id in Large omap log message
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30922
merged
- 11:02 PM Backport #41959: luminous: tools/rados: add --pgid in help
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30608
merged
- 11:02 PM Backport #41962: luminous: Segmentation fault in rados ls when using --pgid and --pool/-p togethe...
- Vikhyat Umrao wrote:
> https://github.com/ceph/ceph/pull/30608
merged
- 11:01 PM Backport #41845: luminous: tools/rados: allow list objects in a specific pg in a pool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30608
merged
- 11:01 PM Backport #42153: luminous: Removed OSDs with outstanding peer failure reports crash the monitor
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30905
merged
- 11:00 PM Backport #42127: luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30926
merged
- 01:54 PM Feature #42351 (New): Ability to discover CIDR by introspecting Network interface
- Rook wants to implement Multus networking, and currently the IPAM static type (one of the most common ones) does not imp...
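A rough sketch, in Python, of what discovering the CIDR from a network interface could look like from userspace, using iproute2 output and the ipaddress module; the interface name eth0 is a placeholder and this is not Rook's implementation:

    # Sketch: derive the CIDR a given interface lives on by parsing
    # `ip -o -4 addr show` output. Interface name is a placeholder.
    import ipaddress
    import subprocess

    def interface_cidr(ifname: str = "eth0") -> str:
        out = subprocess.check_output(
            ["ip", "-o", "-4", "addr", "show", "dev", ifname], text=True)
        # A line looks roughly like: "2: eth0    inet 192.168.1.23/24 brd ..."
        for line in out.splitlines():
            fields = line.split()
            if "inet" in fields:
                addr = fields[fields.index("inet") + 1]           # "192.168.1.23/24"
                return str(ipaddress.ip_interface(addr).network)  # "192.168.1.0/24"
        raise RuntimeError("no IPv4 address found on " + ifname)

    print(interface_cidr())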
- 10:21 AM Bug #42347: nautilus assert during osd shutdown: FAILED ceph_assert((sharded_in_flight_list.back(...
- coredump and log file @ ceph-post-file: a0fcd877-46da-4491-9e58-5ae117cfb92b
- 09:09 AM Bug #42347 (Won't Fix): nautilus assert during osd shutdown: FAILED ceph_assert((sharded_in_fligh...
- Looks like #38377, but that is already fixed in nautilus.
We see this occasionally during OSD shutdown:...
- 08:25 AM Bug #42346: Nearfull warnings are incorrect
- osd.96 is near full
96 hdd 10.00000 1.00000 9.1 TiB 6.8 TiB 6.8 TiB 84 KiB 17 GiB 2.3 TiB 74.57 1.20 3...
- 08:12 AM Bug #42346 (Resolved): Nearfull warnings are incorrect
- OSD_NEARFULL 2 nearfull osd(s)
osd.53 is near full
53 hdd 9.09470 1.00000 9.1 TiB 1.3 GiB 287 MiB 0...
- 08:15 AM Bug #26958 (Resolved): osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log().get_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:09 AM Backport #41449 (Resolved): mimic: mon: C_AckMarkedDown has not handled the Callback Arguments
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30213
m...
- 06:21 AM Backport #39537 (Resolved): luminous: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()-...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28989
m...
- 06:21 AM Bug #42341: OSD PGs are not being purged
- This happens during data copy or rebalance.
It's a major issue because ceph only copies data to the FULLEST OSD, for...
10/16/2019
- 11:26 PM Backport #41449: mimic: mon: C_AckMarkedDown has not handled the Callback Arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30213
merged
- 11:07 PM Backport #39537: luminous: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log()....
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28989
merged
- 12:48 PM Bug #42341 (New): OSD PGs are not being purged
- related ML thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-October/037017.html
Apparently some PG...
- 09:02 AM Feature #40420 (Resolved): Introduce an ceph.conf option to disable HEALTH_WARN when nodeep-scrub...
- I posed David's question on backport targets at https://github.com/ceph/ceph/pull/29422#issuecomment-532215897 and it...
10/15/2019
- 10:13 PM Bug #42332 (In Progress): CephContext::CephContextServiceThread might pause for 5 seconds at shut...
- 10:11 PM Bug #42332 (Resolved): CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
- The entry loop in CephContext::CephContextServiceThread doesn't check for thread exit prior to waiting. This can resu...
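The pattern being described, illustrated below in Python rather than the actual C++ in src/common/ceph_context.cc: if the exit flag is only examined after the timed wait, a shutdown requested just before the wait still sleeps the full 5-second interval.

    # Illustration only (the real code is C++): a periodic service loop that
    # waits up to 5 seconds between iterations. Checking the exit flag *before*
    # waiting, and notifying the condition on shutdown, avoids the 5s stall.
    import threading

    class ServiceThread(threading.Thread):
        def __init__(self):
            super().__init__()
            self._cond = threading.Condition()
            self._exit = False

        def run(self):
            with self._cond:
                while True:
                    if self._exit:               # check before waiting
                        return
                    self._cond.wait(timeout=5.0)
                    # ... periodic work (heartbeat checks, etc.) ...

        def shutdown(self):
            with self._cond:
                self._exit = True
                self._cond.notify_all()          # wake the loop immediately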
- 07:52 PM Backport #41918 (Resolved): mimic: osd: scrub error on big objects; make bluestore refuse to star...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30784
m...
- 07:44 PM Backport #41918: mimic: osd: scrub error on big objects; make bluestore refuse to start on big ob...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30784
merged
- 05:25 PM Backport #42326 (In Progress): nautilus: max_size from crushmap ignored when increasing size on pool
- 09:47 AM Backport #42326 (Resolved): nautilus: max_size from crushmap ignored when increasing size on pool
- https://github.com/ceph/ceph/pull/30941
- 11:18 AM Bug #42328 (Resolved): osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- Observing on the recent master when running rbd suite [1]:...
- 08:54 AM Feature #42321 (Fix Under Review): Add a new mode to balance pg layout by primary osds
- The upmap optimizer has been available since the Luminous release. It helps balance PGs across OSDs,...
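A rough sketch of how the primary imbalance this feature targets could be measured, by counting acting primaries per OSD from `ceph pg dump`; the JSON layout differs between releases, so the field access below is an assumption rather than a fixed schema:

    # Sketch: count how many PGs each OSD is acting primary for. The JSON
    # shape of `ceph pg dump pgs_brief` varies by release, so the lookups
    # below are assumptions, handled defensively.
    import json
    import subprocess
    from collections import Counter

    def primary_counts() -> Counter:
        raw = subprocess.check_output(
            ["ceph", "pg", "dump", "pgs_brief", "-f", "json"], text=True)
        data = json.loads(raw)
        pgs = data if isinstance(data, list) else data.get("pg_stats", [])
        return Counter(pg["acting_primary"] for pg in pgs)

    for osd, count in sorted(primary_counts().items()):
        print("osd.%d: acting primary for %d PGs" % (osd, count))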
- 08:47 AM Bug #42060: Slow ops seen when one ceph private interface is shut down
- Yes, ~3 minutes after disabling the network, the OSDs were marked down. I brought the network back up after 5 minutes, and until th...
- 07:51 AM Backport #42126 (In Progress): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 07:45 AM Backport #42127 (In Progress): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 07:40 AM Backport #42127 (New): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 07:39 AM Backport #42128 (In Progress): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 07:31 AM Backport #41844 (In Progress): mimic: tools/rados: allow list objects in a specific pg in a pool
- 07:23 AM Bug #36732: tools/rados: fix segmentation fault
- This fix was merged before the v14.2.0 (nautilus) release.
Backports:
* luminous https://github.com/ceph/ceph/p...
- 06:39 AM Backport #42240 (In Progress): mimic: Adding Placement Group id in Large omap log message
- 06:35 AM Backport #42242 (In Progress): nautilus: Adding Placement Group id in Large omap log message
- 06:33 AM Backport #42241 (In Progress): luminous: Adding Placement Group id in Large omap log message