Activity

From 06/18/2020 to 07/17/2020

07/17/2020

06:10 PM Bug #46596 (Resolved): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***: /usr/bi...
Kefu Chai
03:47 PM Bug #46596 (Fix Under Review): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***:...
https://github.com/ceph/ceph-container/pull/1712 Kefu Chai
11:53 AM Bug #46596: ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***: /usr/bin/ceph-osd ...
there is a small possibility that this is related to https://github.com/ceph/ceph/pull/33770 Sebastian Wagner
11:24 AM Bug #46596 (Resolved): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***: /usr/bi...
This is likely a regression that was merged yesterday into master (July 16th).... Sebastian Wagner
05:56 PM Backport #46017 (Resolved): nautilus: ceph_test_rados_watch_notify hang
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36031
m...
Nathan Cutler
05:39 PM Backport #46017: nautilus: ceph_test_rados_watch_notify hang
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36031
merged
Yuri Weinstein
05:56 PM Backport #46164 (Resolved): nautilus: osd: make message cap option usable again
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35738
m...
Nathan Cutler
05:38 PM Backport #46164: nautilus: osd: make message cap option usable again
Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/35738
merged
Yuri Weinstein
05:27 PM Bug #46603 (New): osd/osd-backfill-space.sh: TEST_ec_backfill_simple: return 1
... Neha Ojha
04:16 PM Backport #46090 (In Progress): nautilus: PG merge: FAILED ceph_assert(info.history.same_interval_...
Nathan Cutler
02:44 PM Bug #40081: mon: luminous crash attempting to decode maps after nautilus quorum has been formed
https://github.com/ceph/ceph/pull/28671 was closed
That PR was cherry-picked to nautilus via https://github.com/ce...
Nathan Cutler
11:17 AM Backport #46595 (Resolved): octopus: crash in Objecter and CRUSH map lookup
https://github.com/ceph/ceph/pull/36662 Nathan Cutler
11:17 AM Bug #44314 (Resolved): osd-backfill-stats.sh failing intermittently in TEST_backfill_sizeup_out()...
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
11:16 AM Bug #45606 (Resolved): build_incremental_map_msg missing incremental map while snaptrim or backfi...
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
11:16 AM Bug #45733 (Resolved): osd-scrub-repair.sh: SyntaxError: invalid syntax
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
11:16 AM Bug #45795 (Resolved): PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recovery_backfill().e...
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
11:16 AM Bug #45943 (Resolved): Ceph Monitor heartbeat grace period does not reset.
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
11:15 AM Bug #46053 (Resolved): osd: wakeup all threads of shard rather than one thread
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
11:14 AM Backport #46587 (Resolved): nautilus: The default value of osd_scrub_during_recovery is false sin...
https://github.com/ceph/ceph/pull/37472 Nathan Cutler
11:14 AM Backport #46586 (Resolved): octopus: The default value of osd_scrub_during_recovery is false sinc...
https://github.com/ceph/ceph/pull/36661 Nathan Cutler
11:13 AM Backport #45890 (Resolved): nautilus: osd: pg stuck in waitactingchange when new acting set doesn...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35389
m...
Nathan Cutler
11:13 AM Backport #45883 (Resolved): nautilus: osd-scrub-repair.sh: SyntaxError: invalid syntax
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35388
m...
Nathan Cutler
11:12 AM Backport #45776 (Resolved): nautilus: build_incremental_map_msg missing incremental map while sna...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35386
m...
Nathan Cutler
11:09 AM Backport #46286 (Resolved): octopus: mon: log entry with garbage generated by bad memory access
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36035
m...
Nathan Cutler
11:06 AM Backport #46261 (Resolved): octopus: larger osd_scrub_max_preemptions values cause Floating point...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36034
m...
Nathan Cutler
11:06 AM Backport #46089 (Resolved): octopus: PG merge: FAILED ceph_assert(info.history.same_interval_sinc...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36033
m...
Nathan Cutler
11:05 AM Backport #46086 (Resolved): octopus: osd: wakeup all threads of shard rather than one thread
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36032
m...
Nathan Cutler
11:05 AM Backport #46016 (Resolved): octopus: osd-backfill-stats.sh failing intermittently in TEST_backfil...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36030
m...
Nathan Cutler
11:05 AM Backport #46007 (Resolved): octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recover...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36029
m...
Nathan Cutler

07/16/2020

05:09 PM Backport #45890: nautilus: osd: pg stuck in waitactingchange when new acting set doesn't change
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35389
merged
Yuri Weinstein
05:08 PM Backport #45883: nautilus: osd-scrub-repair.sh: SyntaxError: invalid syntax
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35388
merged
Yuri Weinstein
05:08 PM Backport #45776: nautilus: build_incremental_map_msg missing incremental map while snaptrim or ba...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35386
merged
Yuri Weinstein
04:30 PM Backport #46286: octopus: mon: log entry with garbage generated by bad memory access
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36035
merged
Yuri Weinstein
01:14 PM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
Markus do you have a coredump available for further debugging? Dan van der Ster
12:52 PM Bug #43365: Nautilus: Random mon crashes in failed assertion at ceph::time_detail::signedspan
We experienced this bug as well, so I investigated it myself, though I'm no timekeeping, Ceph, or C++ expert, jus... Anonymous
11:36 AM Documentation #46554 (Resolved): Malformed sentence in RADOS page
Zac Dover
09:54 AM Bug #42668 (Won't Fix): ceph daemon osd.* fails in osd container but ceph daemon mds.* does not f...
Ben, just run `unset CEPH_ARGS` once in the OSD container, then you will be able to use the socket commands. Sébastien Han
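A minimal sketch of the suggested workaround, assuming a shell inside the OSD container (the daemon name `osd.0` is a placeholder for whichever OSD runs in that container):

```shell
# CEPH_ARGS set in the container environment interferes with the
# admin socket commands; clear it for the current shell session.
unset CEPH_ARGS

# Admin socket commands should now work, e.g.:
ceph daemon osd.0 config show
```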
09:23 AM Bug #44311 (Pending Backport): crash in Objecter and CRUSH map lookup
Kefu Chai
02:38 AM Bug #46562 (Rejected): ceph tell PGID scrub/deep_scrub stopped working

At this point I don't know if it was broken by the https://github.com/ceph/ceph/pull/30217 change.
David Zafman
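The command form in question, sketched with a hypothetical PG id (substitute a real PGID from `ceph pg dump`):

```shell
# The tell variants reported as no longer working:
ceph tell 2.0 scrub
ceph tell 2.0 deep_scrub
```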

07/15/2020

09:30 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
/a/yuriw-2020-07-13_23:06:23-rados-wip-yuri5-testing-2020-07-13-1944-octopus-distro-basic-smithi/5224649 Neha Ojha
04:36 PM Documentation #46554: Malformed sentence in RADOS page
Item 22 here:
https://pad.ceph.com/p/Report_Documentation_Bugs
Zac Dover
04:32 PM Documentation #46554 (Resolved): Malformed sentence in RADOS page
https://docs.ceph.com/docs/master/rados/
The current sentence:
Once you have a deployed a Ceph Storage Clust...
Zac Dover
03:58 PM Documentation #45988 (Resolved): [doc/os]: Centos 8 is not listed even though it is supported
Zac Dover
11:09 AM Documentation #46545 (New): Two Developer Guide pages might be redundant
https://docs.ceph.com/docs/master/dev/internals/
and
https://docs.ceph.com/docs/master/dev/developer_guide/
...
Zac Dover
10:58 AM Documentation #46531 (Pending Backport): The default value of osd_scrub_during_recovery is false ...
Kefu Chai
10:24 AM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
/a//kchai-2020-07-15_09:19:03-rados-wip-kefu-testing-2020-07-13-2108-distro-basic-smithi/5228761 Kefu Chai

07/14/2020

10:50 PM Bug #44311 (Fix Under Review): crash in Objecter and CRUSH map lookup
Jason Dillaman
04:43 PM Bug #44311: crash in Objecter and CRUSH map lookup
I am hitting this all the time now that librbd is using the 'neorados' API [1]. I plan to just rebuild the rmaps when... Jason Dillaman
04:41 PM Bug #44311 (In Progress): crash in Objecter and CRUSH map lookup
Jason Dillaman
07:12 PM Backport #46228 (Resolved): nautilus: Ceph Monitor heartbeat grace period does not reset.
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35798
m...
Nathan Cutler
04:22 PM Backport #46228: nautilus: Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/35798
merged
Yuri Weinstein
07:02 PM Documentation #46531 (Fix Under Review): The default value of osd_scrub_during_recovery is false ...
Nathan Cutler
12:05 PM Documentation #46531: The default value of osd_scrub_during_recovery is false since v11.1.1
I can't figure out how to add the pull request ID above, so here's a link to it instead: https://github.com/ceph/ceph... Benoît Knecht
11:58 AM Documentation #46531 (Resolved): The default value of osd_scrub_during_recovery is false since v1...
Since 8dca17c, `osd_scrub_during_recovery` defaults to `false`, but the documentation was still stating that its defa... Benoît Knecht
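A quick way to confirm the default on a running cluster (a sketch, assuming nautilus or later where `ceph config get` is available; per the report it should print `false`):

```shell
# Query the effective default for OSDs on a live cluster.
ceph config get osd osd_scrub_during_recovery
```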
06:58 PM Bug #37532 (Fix Under Review): mon: expected_num_objects warning triggers on bluestore-only setups
Nathan Cutler
08:58 AM Bug #37532: mon: expected_num_objects warning triggers on bluestore-only setups
Joao Eduardo Luis wrote:
> I don't think it's wise to simply remove the code because filestore is no longer the defa...
yunqing wang
04:19 PM Backport #46261: octopus: larger osd_scrub_max_preemptions values cause Floating point exception
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36034
merged
Yuri Weinstein
04:18 PM Backport #46089: octopus: PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36033
merged
Yuri Weinstein
04:18 PM Backport #46086: octopus: osd: wakeup all threads of shard rather than one thread
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36032
merged
Yuri Weinstein
04:17 PM Backport #46016: octopus: osd-backfill-stats.sh failing intermittently in TEST_backfill_sizeup_ou...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36030
merged
Yuri Weinstein
04:16 PM Backport #46007: octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recovery_backfill(...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36029
merged
Yuri Weinstein
04:01 PM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
/a/yuriw-2020-07-13_19:30:53-rados-wip-yuri6-testing-2020-07-13-1520-octopus-distro-basic-smithi/5223525 Neha Ojha
01:20 PM Bug #17170: mon/monclient: update "unable to obtain rotating service keys when osd init" to sugge...
Issue fixed after setting the correct NTP server on the machines.
Followed the instructions here: https://access.redhat....
Yuval Lifshitz
09:36 AM Bug #17170: mon/monclient: update "unable to obtain rotating service keys when osd init" to sugge...
issue still seen in pacific dev version:... Yuval Lifshitz

07/13/2020

07:32 PM Bug #46508: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
Neha Ojha wrote:
> Does not look related to https://tracker.ceph.com/issues/45619 or caused by https://github.com/ce...
Neha Ojha
07:30 PM Bug #46508: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
Does not look related to https://tracker.ceph.com/issues/45619 or caused by https://github.com/ceph/ceph/commit/d4fba... Neha Ojha
07:02 PM Bug #46508 (New): Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados ... Neha Ojha
05:48 PM Bug #46506 (New): RuntimeError: Exiting scrub checking -- not all pgs scrubbed.
... Neha Ojha
01:20 PM Documentation #16356 (Resolved): doc: manual deployment of ceph monitor needs fix
https://github.com/ceph/ceph/pull/31452 resolves this issue. Zac Dover
11:19 AM Bug #46445 (Fix Under Review): nautilis client may hunt for mon very long if msg v2 is not enable...
Mykola Golub
05:04 AM Bug #46242: rados -p default.rgw.buckets.data returning over millions objects No such file or dir...
Hi Josh,
Object still exists:
Just ran your test and it failed with single quotes -> https://gyazo.com/a04fcf5b522...
Manuel Rios

07/10/2020

09:39 PM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
fa842716b6dc3b2077e296d388c646f1605568b0 arrived in v14.2.10 and touches _committed_osd_maps Dan van der Ster
07:42 AM Bug #46443 (Resolved): ceph_osd crash in _committed_osd_maps when failed to encode first inc map
We upgraded a mimic cluster to v14.2.10; everything was running and OK.
I triggered a monmap change with the command...
Markus Binz
09:32 PM Backport #46460 (In Progress): octopus: pybind/mgr/balancer: should use "==" and "!=" for compari...
Nathan Cutler
05:50 PM Backport #46460 (Resolved): octopus: pybind/mgr/balancer: should use "==" and "!=" for comparing ...
https://github.com/ceph/ceph/pull/36036 Nathan Cutler
09:31 PM Backport #46286 (In Progress): octopus: mon: log entry with garbage generated by bad memory access
Nathan Cutler
09:30 PM Backport #46261 (In Progress): octopus: larger osd_scrub_max_preemptions values cause Floating po...
Nathan Cutler
09:29 PM Backport #46089 (In Progress): octopus: PG merge: FAILED ceph_assert(info.history.same_interval_s...
Nathan Cutler
09:28 PM Backport #46086 (In Progress): octopus: osd: wakeup all threads of shard rather than one thread
Nathan Cutler
09:27 PM Backport #46017 (In Progress): nautilus: ceph_test_rados_watch_notify hang
Nathan Cutler
09:26 PM Backport #46018 (Resolved): octopus: ceph_test_rados_watch_notify hang
The original fix went into octopus during the pre-release phase when bugfixes were being merged to octopus and octopu... Nathan Cutler
09:24 PM Backport #46016 (In Progress): octopus: osd-backfill-stats.sh failing intermittently in TEST_back...
Nathan Cutler
09:23 PM Backport #46007 (In Progress): octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_reco...
Nathan Cutler
08:46 PM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
Couple of updates on this:
1. Reproduced the issue with some extra debug logging.
https://pulpito.ceph.com/no...
Neha Ojha
05:50 PM Backport #46461 (Resolved): nautilus: pybind/mgr/balancer: should use "==" and "!=" for comparing...
https://github.com/ceph/ceph/pull/37471 Nathan Cutler
08:20 AM Bug #46445 (Resolved): nautilis client may hunt for mon very long if msg v2 is not enabled on mons
The problem is observed for a nautilus client. For newer client versions the situation is accidentally much better (s... Mykola Golub
06:10 AM Backport #46229 (Resolved): octopus: Ceph Monitor heartbeat grace period does not reset.
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35799
m...
Nathan Cutler
06:09 AM Backport #46165 (Resolved): octopus: osd: make message cap option usable again
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35737
m...
Nathan Cutler

07/09/2020

07:08 PM Bug #46437 (Closed): Admin Socket leaves behind .asok files after daemons (ex: RGW) shut down gra...
Reproducer(s):
0. be in build dir
1. run vstart.sh
2. edit stop.sh to not `rm -rf "${asok_dir}"`
3. do ls of /tmp...
Ali Maredia
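The reproducer steps above might be scripted roughly like this (a sketch against a vstart build directory; the vstart invocation and the `/tmp` socket location are assumptions and may differ from your setup):

```shell
cd build                       # 0. be in the build dir
../src/vstart.sh -n            # 1. start a fresh test cluster
# 2. with stop.sh edited so it no longer runs `rm -rf "${asok_dir}"`:
../src/stop.sh
# 3. any leftover admin sockets demonstrate the leak
ls /tmp/ceph-asok.*/ 2>/dev/null
```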
02:16 PM Backport #46408 (In Progress): octopus: Health check failed: 4 mgr modules have failed (MGR_MODUL...
Kefu Chai
02:34 AM Bug #46428 (In Progress): mon: all the 3 mon daemons crashed when running the fs aio test
The logs:... Xiubo Li

07/08/2020

09:19 PM Bug #46125: ceph mon memory increasing
You can attempt to use a lower target; it's not something we've tested much for the monitors. We expect the monitor t... Josh Durgin
09:17 PM Bug #46242: rados -p default.rgw.buckets.data returning over millions objects No such file or dir...
Can you verify the object name is passed to 'rados rm' correctly by enclosing it in single quotes?
Is it possible ...
Josh Durgin
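The suggested check, sketched with a hypothetical object name (single quotes keep the shell from expanding or splitting special characters in the name):

```shell
# Hypothetical object name; quote it verbatim so the shell passes it
# through unchanged to rados.
rados -p default.rgw.buckets.data rm 'object name with $pecial chars'
```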
07:34 PM Backport #46229: octopus: Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/35799
merged
Yuri Weinstein
07:32 PM Backport #46165: octopus: osd: make message cap option usable again
Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/35737
merged
Yuri Weinstein
06:49 PM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-comp... Neha Ojha
06:44 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
Since the original feature is being backported to nautilus and octopus.
/a/yuriw-2020-07-06_17:23:10-rados-wip-yur...
Neha Ojha
03:27 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
https://pulpito.ceph.com/nojha-2020-07-08_01:02:55-rados:standalone-master-distro-basic-smithi/ Neha Ojha
01:05 AM Bug #46405 (Resolved): osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
... Neha Ojha
06:37 PM Bug #45647: "ceph --cluster ceph --log-early osd last-stat-seq osd.0" times out due to msgr-failu...
/a/yuriw-2020-07-06_19:37:47-rados-wip-yuri7-testing-2020-07-06-1754-octopus-distro-basic-smithi/5204335 Neha Ojha
06:21 PM Bug #45761: mon_thrasher: "Error ENXIO: mon unavailable" during sync_force command leads to "fail...
/a/yuriw-2020-07-06_19:37:47-rados-wip-yuri7-testing-2020-07-06-1754-octopus-distro-basic-smithi/5204398 Neha Ojha
04:06 PM Bug #45139: osd/osd-markdown.sh: markdown_N_impl failure
/a/nojha-2020-07-08_01:02:55-rados:standalone-master-distro-basic-smithi/5207257 Neha Ojha
01:57 PM Documentation #46421 (In Progress): Add LoadBalancer Guide
I'm adding the LoadBalancer Guide, and I'm going to put a link to it on the Install Page.
In a future version of d...
Zac Dover
10:17 AM Backport #46408 (New): octopus: Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
Laura Paduano
07:46 AM Backport #46408 (In Progress): octopus: Health check failed: 4 mgr modules have failed (MGR_MODUL...
Laura Paduano
05:29 AM Backport #46408 (Resolved): octopus: Health check failed: 4 mgr modules have failed (MGR_MODULE_E...
https://github.com/ceph/ceph/pull/35995 Nathan Cutler
01:12 AM Bug #46224 (Pending Backport): Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
/a/yuriw-2020-07-06_19:37:47-rados-wip-yuri7-testing-2020-07-06-1754-octopus-distro-basic-smithi/5204440/ Neha Ojha

07/07/2020

11:17 AM Backport #46372 (In Progress): osd: expose osdspec_affinity to osd_metadata
Nathan Cutler
11:14 AM Backport #46372 (New): osd: expose osdspec_affinity to osd_metadata
Follow-up of https://github.com/ceph/ceph/pull/34835
Fixes: https://tracker.ceph.com/issues/44755
Nathan Cutler
11:15 AM Bug #44755: Create stronger affinity between drivegroup specs and osd daemons
Moving to RADOS project so it can be backported in the usual way. Nathan Cutler
06:56 AM Bug #46381 (New): pg down error and osd failed by :Objecter::_op_submit_with_budget
ENV:
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7...
Amine Liu

07/06/2020

10:29 PM Feature #46379: Add a force-scrub commands to bump already running scrubs
Maybe repair is always “force-repair”. It could be confusing to the user to do that. David Zafman
10:19 PM Feature #46379 (New): Add a force-scrub commands to bump already running scrubs

As it stands, a user-requested scrub gets first priority to start. However, if existing scrubs are already running,...
David Zafman
10:11 PM Feature #41363: Allow user to cancel scrub requests

Possible implementations:...
David Zafman
12:15 PM Backport #46372 (Duplicate): osd: expose osdspec_affinity to osd_metadata
Joshua Schmid
12:12 PM Backport #46372 (Resolved): osd: expose osdspec_affinity to osd_metadata
https://github.com/ceph/ceph/pull/35957 Joshua Schmid
10:24 AM Bug #43174: pgs inconsistent, union_shard_errors=missing
Our partners noticed that actually there is an issue with how the bluestore escapes the key strings. Here is their pa... Mykola Golub
05:43 AM Bug #43174: pgs inconsistent, union_shard_errors=missing
One of our customers also experienced this issue after adding bluestore OSDs to a filestore-backed cluster.
Using ...
Mykola Golub

07/05/2020

12:05 PM Documentation #46361 (New): Update list of leads in the Developer Guide
https://docs.ceph.com/docs/master/dev/developer_guide/essentials/
Make sure that this list is up-to-date as of mid...
Zac Dover
12:02 PM Bug #46359 (Resolved): Install page has typo: s/suites/suits/
Zac Dover
08:26 AM Bug #46359 (Fix Under Review): Install page has typo: s/suites/suits/
Zac Dover
08:21 AM Bug #46359 (Resolved): Install page has typo: s/suites/suits/
https://docs.ceph.com/docs/master/install/
This page contains the following sentence:
Choose the method tha...
Zac Dover
02:44 AM Bug #46358 (Rejected): FAIL: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)
https://github.com/ceph/ceph/pull/33827 has not been merged yet Kefu Chai
02:42 AM Bug #46358 (Rejected): FAIL: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)
... Kefu Chai

07/03/2020

12:22 PM Backport #46115 (Resolved): octopus: Add statfs output to ceph-objectstore-tool
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35715
m...
Nathan Cutler
12:30 AM Bug #43888 (In Progress): osd/osd-bench.sh 'tell osd.N bench' hang
/a/dzafman-2020-06-08_11:45:40-rados-wip-zafman-testing-distro-basic-smithi/5130086
This is the command we care ab...
Neha Ojha

07/02/2020

04:46 PM Bug #46285 (Rejected): osd: error from smartctl is always reported as invalid JSON
turns out the report was from an earlier version (it did not contain the 'output' key) Josh Durgin
04:37 PM Bug #46179 (Duplicate): Health check failed: Reduced data availability: PG_AVAILABILITY
Neha Ojha
04:36 PM Bug #46225 (Duplicate): Health check failed: 1 osds down (OSD_DOWN)
Neha Ojha
01:35 PM Bug #46264: mon: check for mismatched daemon versions
I have completed a function called check_daemon_version located in src/mon/Monitor.cc. This function goes through mon_... Tyler Sheehan
09:48 AM Bug #44755 (Pending Backport): Create stronger affinity between drivegroup specs and osd daemons
Sebastian Wagner
09:04 AM Bug #46178 (Duplicate): slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+know...
Ilya Dryomov
08:56 AM Bug #46180 (Resolved): qa: Scrubbing terminated -- not all pgs were active and clean.
Will be cherry-picked into https://github.com/ceph/ceph/pull/35720 and https://github.com/ceph/ceph/pull/35733. Ilya Dryomov

07/01/2020

10:55 PM Bug #46325 (Rejected): A pool at size 3 should have a min_size 2

The get_osd_pool_default_min_size() calculation of size - size/2 for the min_size should special case size 3 and ju...
David Zafman
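The formula as described, sketched with shell integer arithmetic (this shows only the current `size - size/2` behaviour as I read the report; the requested special-casing is not included):

```shell
# Default min_size derivation described in the report: size - size/2,
# with integer (floor) division.
default_min_size() { echo $(( $1 - $1 / 2 )); }

default_min_size 2   # -> 1
default_min_size 3   # -> 2
default_min_size 4   # -> 2
```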
10:03 PM Bug #37509 (Can't reproduce): require past_interval bounds mismatch due to osd oldest_map
Neha Ojha
09:58 PM Bug #23879 (Can't reproduce): test_mon_osdmap_prune.sh fails
Neha Ojha
09:57 PM Bug #23857 (Can't reproduce): flush (manifest) vs async recovery causes out of order op
Neha Ojha
09:56 PM Bug #23828 (Can't reproduce): ec gen object leaks into different filestore collection just after ...
Neha Ojha
09:53 PM Bug #23117: PGs stuck in "activating" after osd_max_pg_per_osd_hard_ratio has been exceeded once
We should try to make it more obvious when this limit is hit. I thought we added something in the cluster logs about ... Neha Ojha
09:49 PM Documentation #46324 (New): Sepia VPN Client Access documentation is out-of-date
https://wiki.sepia.ceph.com/doku.php?id=vpnaccess#vpn_client_access
There are two issues that I noticed that must ...
Zac Dover
09:49 PM Bug #20960 (Can't reproduce): ceph_test_rados: mismatched version (due to pg import/export)
The thrash_cache_writeback_proxy_none failure has a different root cause, opened a new tracker for it https://tracker... Neha Ojha
09:47 PM Bug #46323 (Resolved): thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value...
... Neha Ojha
09:35 PM Bug #19700 (Closed): OSD remained up despite cluster network being inactive?
Please reopen this bug if the issue is seen in nautilus or newer releases. Neha Ojha
09:22 PM Bug #43882 (Can't reproduce): osd to mon connection lost, osd stuck down
Neha Ojha
09:16 PM Bug #44631 (Can't reproduce): ceph pg dump error code 124
Neha Ojha
07:58 PM Bug #46275: Cancellation of on-going scrubs
We may be able to easily terminate scrubbing in between chunks if the noscrub/nodeep-scrub flags get set.
I will test this.
David Zafman
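The flags mentioned above are set cluster-wide with the standard commands (sketch; setting them currently prevents new scrubs from starting, and the idea here is to also stop running scrubs between chunks):

```shell
# Prevent new scrubs (and, per the proposal, stop in-flight ones):
ceph osd set noscrub
ceph osd set nodeep-scrub

# Clear the flags again:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```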
07:56 PM Bug #46275 (In Progress): Cancellation of on-going scrubs
David Zafman
07:32 PM Backport #46095 (Resolved): octopus: Issue health status warning if num_shards_repaired exceeds s...
Josh Durgin
07:22 PM Backport #46115: octopus: Add statfs output to ceph-objectstore-tool
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35715
merged
Yuri Weinstein
06:00 PM Bug #46318 (Need More Info): mon_recovery: quorum_status times out
... Neha Ojha
05:05 PM Bug #46285: osd: error from smartctl is always reported as invalid JSON
Which version is this cluster running?
I would expect to see this "output" key in the command's output:
https://g...
Yaarit Hatuka
02:43 AM Bug #46285 (Rejected): osd: error from smartctl is always reported as invalid JSON
When smartctl returns an error, the osd always reports it as invalid json. We meant to give a better error, but the c... Josh Durgin
02:51 AM Backport #46287 (Rejected): nautilus: mon: log entry with garbage generated by bad memory access
Patrick Donnelly
02:51 AM Backport #46286 (Resolved): octopus: mon: log entry with garbage generated by bad memory access
https://github.com/ceph/ceph/pull/36035 Patrick Donnelly

06/30/2020

09:27 PM Bug #46222 (Won't Fix): Cbt installation task for cosbench fails.
The root cause of this issue is that we put an older version of cosbench in https://drop.ceph.com/qa/ after the recen... Neha Ojha
01:07 PM Bug #46222: Cbt installation task for cosbench fails.

http://qa-proxy.ceph.com/teuthology/ideepika-2020-06-29_08:23:54-rados-wip-deepika-testing-2020-06-25-2058-distro-b...
Deepika Upadhyay
05:37 PM Bug #46216 (Pending Backport): mon: log entry with garbage generated by bad memory access
Patrick Donnelly
04:41 PM Bug #46216 (Fix Under Review): mon: log entry with garbage generated by bad memory access
Neha Ojha
04:23 PM Documentation #46279 (New): various matters related to ceph mon and orch cephadm -- this is sever...
<andyg5> Hi, I am trying to move the MONitors over to the public network, and I'm not sure how to do it. I have setu... Zac Dover
03:07 PM Bug #46224 (Resolved): Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
Neha Ojha
01:52 PM Bug #46275 (Resolved): Cancellation of on-going scrubs
Although it's possible to prevent initiating new scrubs, we don't have a facility for terminating already on-going on... Radoslaw Zarzynski
08:30 AM Bug #46264: mon: check for mismatched daemon versions
Hm, what do you expect? Upgrade scenarios can become complicated with more than two versions running at the same time ... Sebastian Wagner

06/29/2020

09:22 PM Bug #46266 (Need More Info): Monitor crashed in creating pool in CrushTester::test_with_fork()
Hi. I was creating a new pool and one of my monitors crashed.... Seena Fallah
06:44 PM Bug #43553: mon: client mon_status fails
/ceph/teuthology-archive/yuriw-2020-06-25_22:31:00-fs-octopus-distro-basic-smithi/5180260/teuthology.log Patrick Donnelly
06:10 PM Bug #46264 (Resolved): mon: check for mismatched daemon versions
There is currently no test to check whether the daemons are all running the same version of Ceph. Tyler Sheehan
05:44 PM Bug #20960: ceph_test_rados: mismatched version (due to pg import/export)
/a/dis-2020-06-28_18:43:20-rados-wip-msgr21-fix-reuse-rebuildci-distro-basic-smithi/5186890 Neha Ojha
05:36 PM Bug #45761: mon_thrasher: "Error ENXIO: mon unavailable" during sync_force command leads to "fail...
/a/dis-2020-06-28_18:43:20-rados-wip-msgr21-fix-reuse-rebuildci-distro-basic-smithi/5186759 Neha Ojha
05:02 PM Backport #46262 (Resolved): nautilus: larger osd_scrub_max_preemptions values cause Floating poin...
https://github.com/ceph/ceph/pull/37470 Nathan Cutler
05:01 PM Backport #46261 (Resolved): octopus: larger osd_scrub_max_preemptions values cause Floating point...
https://github.com/ceph/ceph/pull/36034 Nathan Cutler
12:26 PM Bug #46178: slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirec...
https://pulpito.ceph.com/swagner-2020-06-29_09:26:42-rados:cephadm-wip-swagner-testing-2020-06-26-1524-distro-basic-s... Sebastian Wagner
08:53 AM Bug #44352: pool listings are slow after deleting objects
This was on the latest nautilus release at the time, the DB should have been on SSD but I don't remember. But good po... Paul Emmerich
08:50 AM Bug #45381: unfound objects in erasure-coded CephFS
No, this setup is luckily without any cache tiering. It's a completely standard setup with replicated cephfs_metadata... Paul Emmerich

06/28/2020

10:45 AM Bug #46180 (Fix Under Review): qa: Scrubbing terminated -- not all pgs were active and clean.
Ilya Dryomov
05:17 AM Bug #46024 (Pending Backport): larger osd_scrub_max_preemptions values cause Floating point excep...
xie xingguo

06/27/2020

04:20 PM Bug #46242 (New): rados -p default.rgw.buckets.data returning over millions objects No such file ...
Hi Dev,
Due to sharding / S3 bugs we synced the customer's bucket to new ones.
Once we tried to delete, we're unab...
Manuel Rios
03:15 PM Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
/a/kchai-2020-06-27_07:37:00-rados-wip-kefu-testing-2020-06-27-1407-distro-basic-smithi/5183671/ Kefu Chai
08:25 AM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
... Kefu Chai

06/26/2020

07:27 PM Bug #46178: slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirec...
http://pulpito.ceph.com/mgfritch-2020-06-26_02:07:27-rados-wip-mgfritch-testing-2020-06-25-1855-distro-basic-smithi/... Michael Fritch
06:21 PM Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.
Here's a reliable reproducer for the issue:
-s rados/singleton-nomsgr -c master --filter 'all/health-warnings rado...
Neha Ojha
06:50 AM Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.
I think it has to do with reconnect handling and how connections are reused.
This part of ProtocolV2 is pretty fra...
Ilya Dryomov
05:04 AM Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.
This is a msgr2.1 issue.
Ilya Dryomov
05:48 PM Bug #46225 (Triaged): Health check failed: 1 osds down (OSD_DOWN)
Neha Ojha
05:39 PM Bug #46225: Health check failed: 1 osds down (OSD_DOWN)
Also, related to https://tracker.ceph.com/issues/46180... Neha Ojha
10:57 AM Bug #46225 (Duplicate): Health check failed: 1 osds down (OSD_DOWN)
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/5176410
2020-06-2...
Sridhar Seshasayee
05:34 PM Bug #46227 (Duplicate): Segmentation fault when running ceph_test_keyvaluedb command as part of a...
Duplicate of https://tracker.ceph.com/issues/46054 Neha Ojha
11:19 AM Bug #46227 (Duplicate): Segmentation fault when running ceph_test_keyvaluedb command as part of a...
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/5176446
Unfortunate...
Sridhar Seshasayee
05:31 PM Bug #46179 (Triaged): Health check failed: Reduced data availability: PG_AVAILABILITY
Neha Ojha
05:11 PM Bug #46179: Health check failed: Reduced data availability: PG_AVAILABILITY
This failure is different from the one seen in the RGW suite earlier due to upmap. This is related to https://tracker... Neha Ojha
07:32 AM Bug #46179: Health check failed: Reduced data availability: PG_AVAILABILITY
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/
job ID: 5176200
F...
Sridhar Seshasayee
05:31 PM Bug #46224 (Fix Under Review): Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
Neha Ojha
10:44 AM Bug #46224 (Resolved): Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/5176341 and
/a/ssesha...
Sridhar Seshasayee
05:30 PM Bug #46222 (In Progress): Cbt installation task for cosbench fails.
Neha Ojha
09:03 AM Bug #46222: Cbt installation task for cosbench fails.
See /a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/5176322 as well Sridhar Seshasayee
09:00 AM Bug #46222 (Won't Fix): Cbt installation task for cosbench fails.
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/5176309
2020-06-2...
Sridhar Seshasayee
04:48 PM Feature #46238 (New): raise a HEALTH warn, if OSDs use the cluster_network for the front
Related to: https://tracker.ceph.com/issues/46230 Michal Nasiadka
12:17 PM Backport #46229 (In Progress): octopus: Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee
12:14 PM Backport #46229 (New): octopus: Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee
11:48 AM Backport #46229 (Resolved): octopus: Ceph Monitor heartbeat grace period does not reset.
https://github.com/ceph/ceph/pull/35799 Sridhar Seshasayee
12:13 PM Backport #46228 (In Progress): nautilus: Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee
12:13 PM Backport #46228 (New): nautilus: Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee
11:47 AM Backport #46228 (Resolved): nautilus: Ceph Monitor heartbeat grace period does not reset.
https://github.com/ceph/ceph/pull/35798 Sridhar Seshasayee
11:43 AM Bug #45943 (Pending Backport): Ceph Monitor heartbeat grace period does not reset.
Sridhar Seshasayee
11:14 AM Documentation #46203 (Resolved): docs.ceph.com is down
docs.ceph.com returned four hours later. Zac Dover
08:40 AM Bug #24057: cbt fails to copy results to the archive dir
Observed the issue during this run:
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distr...
Sridhar Seshasayee
07:28 AM Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/
job ID: 5176184
...
Sridhar Seshasayee
07:18 AM Bug #45441: rados: Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log'
Observing the issue during this run:
/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-dist...
Sridhar Seshasayee
04:38 AM Bug #46125: ceph mon memory increasing
I will try with default settings for the monitor. With current config file parameters, the monitor is using 1GB.
I...
Ashish Nagar

06/25/2020

11:56 PM Bug #46216 (Resolved): mon: log entry with garbage generated by bad memory access
Causes the mgr to segmentation fault:... Patrick Donnelly
10:27 PM Bug #46178: slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirec...
/a/yuvalif-2020-06-23_14:40:15-rgw-wip-yuval-test-35331-35155-distro-basic-smithi/5173465
Seems very likely to hav...
Neha Ojha
09:09 PM Bug #46125 (Need More Info): ceph mon memory increasing
Can you try with the default settings for the monitor? What level of memory usage are you seeing exactly?
There is...
Josh Durgin
07:40 PM Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.
The common thing in all of these is that the tests are all failing while running the ceph task, no thrashing or anyth... Neha Ojha
03:15 PM Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.
Saw the same error during this run:
http://pulpito.ceph.com/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-...
Sridhar Seshasayee
05:29 PM Bug #46211 (Duplicate): qa: pools stuck in creating
Patrick Donnelly
05:26 PM Bug #46211 (Duplicate): qa: pools stuck in creating
During cluster setup for the CephFS suites, we see this failure:... Patrick Donnelly
03:44 PM Bug #39039: mon connection reset, command not resent
Hitting this issue on octopus, Fedora 32:... Sunny Kumar
02:18 PM Documentation #46203 (In Progress): docs.ceph.com is down
I'm afraid this is outside my control. We're at the mercy of our cloud provider. Pretty sure it's this: http://trav... David Galloway
07:49 AM Documentation #46203 (Resolved): docs.ceph.com is down
docs.ceph.com has been down since 17:35 AEST, 25 Jun 2020, at the latest.
https://downforeveryoneorjustme.com/docs.ce...
Zac Dover

06/24/2020

02:16 PM Bug #46180 (Resolved): qa: Scrubbing terminated -- not all pgs were active and clean.
Seeing several test failures in the rgw suite:... Casey Bodley
02:09 PM Bug #46179 (Duplicate): Health check failed: Reduced data availability: PG_AVAILABILITY
multiple RGW tests are failing on different branches, with:... Casey Bodley
01:20 PM Bug #46178: slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirec...
http://pulpito.ceph.com/swagner-2020-06-24_11:30:44-rados:cephadm-wip-swagner3-testing-2020-06-24-1025-distro-basic-s... Sebastian Wagner
01:19 PM Bug #46178: slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirec...
http://pulpito.ceph.com/swagner-2020-06-24_11:30:44-rados:cephadm-wip-swagner3-testing-2020-06-24-1025-distro-basic-s... Sebastian Wagner
01:16 PM Bug #46178: slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirec...
http://pulpito.ceph.com/swagner-2020-06-24_11:30:44-rados:cephadm-wip-swagner3-testing-2020-06-24-1025-distro-basic-s... Sebastian Wagner
12:57 PM Bug #46178 (Duplicate): slow request osd_op(... (undecoded) ondisk+retry+read+ignore_overlay+know...
Saw this error yesterday for the first time:
http://pulpito.ceph.com/swagner-2020-06-23_13:15:09-rados:cephadm-wip...
Sebastian Wagner
10:37 AM Backport #45676 (Resolved): octopus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen ...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35236
m...
Nathan Cutler
02:47 AM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
... Kefu Chai
01:36 AM Backport #46164 (In Progress): nautilus: osd: make message cap option usable again
Neha Ojha
01:13 AM Backport #46164 (Resolved): nautilus: osd: make message cap option usable again
https://github.com/ceph/ceph/pull/35738 Neha Ojha
01:28 AM Backport #46165 (In Progress): octopus: osd: make message cap option usable again
Neha Ojha
01:13 AM Backport #46165 (Resolved): octopus: osd: make message cap option usable again
https://github.com/ceph/ceph/pull/35737 Neha Ojha
12:18 AM Bug #46143 (Pending Backport): osd: make message cap option usable again
Neha Ojha

06/23/2020

08:11 PM Backport #45676: octopus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35236
merged
Yuri Weinstein
12:15 AM Bug #45944: osd/osd-markdown.sh: TEST_osd_stop failed
... Neha Ojha

06/22/2020

09:59 PM Backport #46115 (In Progress): octopus: Add statfs output to ceph-objectstore-tool
David Zafman
09:37 PM Backport #46116 (In Progress): nautilus: Add statfs output to ceph-objectstore-tool
David Zafman
06:52 PM Bug #45944: osd/osd-markdown.sh: TEST_osd_stop failed
/a/teuthology-2020-06-19_07:01:02-rados-master-distro-basic-smithi/5164221 Neha Ojha
05:53 PM Bug #46143 (Fix Under Review): osd: make message cap option usable again
Neha Ojha
05:36 PM Bug #46143 (In Progress): osd: make message cap option usable again
Neha Ojha
05:18 PM Bug #46143 (Resolved): osd: make message cap option usable again
"This reverts commit 45d5ac3.
Without a msg throttler, we can't change osd_client_message_cap cap
online. The thr...
Neha Ojha
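The revert above restores the msg throttler so that osd_client_message_cap can be changed online again. A minimal sketch of how one might exercise that on a running cluster, assuming a cluster with an osd.0 (the value 5000 is an arbitrary example, not a recommendation):

```shell
# Change the cap at runtime via the centralized config store
ceph config set osd osd_client_message_cap 5000

# Confirm the running OSD picked up the new value
ceph tell osd.0 config get osd_client_message_cap
```
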
04:57 PM Bug #41154: osd: pg unknown state
I again have this problem.... Alexander Kazansky
03:19 PM Documentation #46141 (New): Document automatic OSD deployment behavior better
Make certain that the documentation notifies readers that OSDs are automatically created, so that they are not caught... Zac Dover
09:12 AM Bug #46137: Monitor leader is marking multiple osd's down
Every few mins multiple osd's are going down and coming back up which is causing recovery of data, This is occurring ... Prayank Saxena
09:07 AM Bug #46137 (New): Monitor leader is marking multiple osd's down
My Ceph cluster consists of 5 mons and 58 data nodes with 1302 total OSDs (HDDs), running version 12.2.8 Luminous (stable), and Fi... Prayank Saxena
06:02 AM Bug #45943: Ceph Monitor heartbeat grace period does not reset.
Updates from testing the fix:
OSD failure before being marked down:...
Sridhar Seshasayee

06/21/2020

02:17 PM Feature #24099: osd: Improve workflow when creating OSD on raw block device if there was bluestor...
John Spray wrote:
> This seems like an odd idea -- if someone is doing OSD creation by hand, why would they want to ...
Niklas Hambuechen
12:25 PM Documentation #46099: document statfs operation for ceph-objectstore-tool

if (op == "statfs") {
    store_statfs_t statsbuf;
    ret = fs->statfs(&statsbuf);
    if (ret < 0) {
    ...
Zac Dover
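For context, the statfs op discussed above is invoked like other ceph-objectstore-tool operations, against a stopped OSD. A hedged sketch, assuming a typical deployment where the OSD data path follows the default layout:

```shell
# The OSD must be stopped before ceph-objectstore-tool can open its store
systemctl stop ceph-osd@0

# Report store usage statistics (data path is the usual default; adjust as needed)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op statfs
```
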
12:10 PM Documentation #46126 (New): RGW docs lack an explanation of how permissions management works, esp...
<dirtwash> you know its sshitty protocol and design if obvious things arent visible and default behavior doesnt work
...
Zac Dover
08:02 AM Bug #46125: ceph mon memory increasing
Hi,
I have deployed ceph single node cluster.
ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) na...
Ashish Nagar
07:13 AM Bug #46125 (Need More Info): ceph mon memory increasing
Hi,
I have deployed ceph single node cluster.
ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) ...
Ashish Nagar

06/20/2020

10:12 PM Backport #46096 (In Progress): nautilus: Issue health status warning if num_shards_repaired excee...
Nathan Cutler
10:09 PM Backport #46095 (In Progress): octopus: Issue health status warning if num_shards_repaired exceed...
Nathan Cutler
09:57 PM Bug #45793 (Resolved): Objecter: don't attempt to read from non-primary on EC pools
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:56 PM Backport #45882 (Resolved): octopus: Objecter: don't attempt to read from non-primary on EC pools
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35444
m...
Nathan Cutler
09:56 PM Backport #45775 (Resolved): octopus: build_incremental_map_msg missing incremental map while snap...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35442
m...
Nathan Cutler
07:59 AM Documentation #46120 (Resolved): Improve ceph-objectstore-tool documentation
https://github.com/ceph/ceph/pull/33823
There are a number of comments by David Zafman that I failed to include in...
Zac Dover
04:20 AM Bug #46065 (Resolved): sudo missing from command in monitor-bootstrapping procedure
Zac Dover

06/19/2020

04:36 PM Backport #46116 (Resolved): nautilus: Add statfs output to ceph-objectstore-tool
https://github.com/ceph/ceph/pull/35713 Nathan Cutler
04:36 PM Backport #46115 (Resolved): octopus: Add statfs output to ceph-objectstore-tool
https://github.com/ceph/ceph/pull/35715 Nathan Cutler
05:00 AM Documentation #46099 (New): document statfs operation for ceph-objectstore-tool
https://github.com/ceph/ceph/pull/35632
https://github.com/ceph/ceph/pull/33823
The affected file (I think) is ...
Zac Dover

06/18/2020

11:26 PM Bug #46064 (Pending Backport): Add statfs output to ceph-objectstore-tool
David Zafman
01:13 AM Bug #46064 (Fix Under Review): Add statfs output to ceph-objectstore-tool
David Zafman
01:08 AM Bug #46064 (In Progress): Add statfs output to ceph-objectstore-tool
David Zafman
01:07 AM Bug #46064 (Resolved): Add statfs output to ceph-objectstore-tool

This will help diagnose out of space crashes:...
David Zafman
10:32 PM Backport #45882: octopus: Objecter: don't attempt to read from non-primary on EC pools
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35444
merged
Yuri Weinstein
10:31 PM Backport #45775: octopus: build_incremental_map_msg missing incremental map while snaptrim or bac...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35442
merged
Yuri Weinstein
08:08 PM Backport #46096 (Resolved): nautilus: Issue health status warning if num_shards_repaired exceeds ...
https://github.com/ceph/ceph/pull/36379 Patrick Donnelly
08:08 PM Backport #46095 (Resolved): octopus: Issue health status warning if num_shards_repaired exceeds s...
https://github.com/ceph/ceph/pull/35685 Patrick Donnelly
08:06 PM Backport #46090 (Resolved): nautilus: PG merge: FAILED ceph_assert(info.history.same_interval_sin...
https://github.com/ceph/ceph/pull/36161 Patrick Donnelly
08:06 PM Backport #46089 (Resolved): octopus: PG merge: FAILED ceph_assert(info.history.same_interval_sinc...
https://github.com/ceph/ceph/pull/36033 Patrick Donnelly
08:06 PM Backport #46086 (Resolved): octopus: osd: wakeup all threads of shard rather than one thread
https://github.com/ceph/ceph/pull/36032 Patrick Donnelly
10:40 AM Bug #46071 (New): potential rocksdb failure: few osd's service not starting up after node reboot....
The data node went down abruptly due to an issue with an SPS-BD Smart Array PCIe SAS Expander; once the hardware was changed, the node c... Prayank Saxena
03:30 AM Bug #46065 (Fix Under Review): sudo missing from command in monitor-bootstrapping procedure
https://github.com/ceph/ceph/pull/35635 Zac Dover
03:25 AM Bug #46065 (Resolved): sudo missing from command in monitor-bootstrapping procedure
Where:
https://docs.ceph.com/docs/master/install/manual-deployment/#monitor-bootstrapping
What:
<badone> https:/...
Zac Dover
 
