Nathan Cutler's activity
From 09/20/2017 to 10/19/2017
10/19/2017
- 09:47 PM RADOS Bug #21204 (Resolved): DNS SRV default service name not used anymore
- 09:43 PM RADOS Bug #21365 (Resolved): Daemons(OSD, Mon...) exit abnormally at injectargs command
- 09:43 PM mgr Fix #21292 (Resolved): Quieten scary RuntimeError from restful module on startup
- 09:42 PM Ceph Bug #21294 (Resolved): ceph_manager: bad AssertionError: failed to recover before timeout expired
- 09:42 PM Ceph Backport #21548 (Resolved): luminous: ceph_manager: bad AssertionError: failed to recover before ...
- 12:15 PM RADOS Backport #21783 (In Progress): luminous: cli/crushtools/build.t sometimes fails in jenkins' "make...
10/18/2017
10/17/2017
10/16/2017
- 08:03 AM Ceph Backport #21780 (Duplicate): xenial 16.04/ after jewel -> luminous upgrade Failed to stop ceph.t...
- Duplicate of #21478
10/15/2017
- 09:37 PM rgw Bug #21148 (Resolved): rgw: reversed account listing of Swift API should be supported
- 09:37 PM rgw Backport #21445 (Resolved): luminous: rgw: reversed account listing of Swift API should be supported
- 09:36 PM rgw Bug #17935 (Resolved): rgw: wrong error code is returned when putting container metadata with too...
- 09:36 PM rgw Backport #21459 (Resolved): luminous: rgw: wrong error code is returned when putting container me...
- 09:36 PM rgw Bug #17934 (Resolved): rgw: /info lacks swift.max_meta_count
- 09:35 PM rgw Backport #21458 (Resolved): luminous: rgw: /info lacks swift.max_meta_count
- 09:35 PM rgw Bug #17936 (Resolved): rgw: /info lacks swift.max_meta_value_length
- 09:35 PM rgw Backport #21457 (Resolved): luminous: rgw: /info lacks swift.max_meta_value_length
- 09:34 PM rgw Bug #17938 (Resolved): rgw: wrong error message is returned when putting container with a name th...
- 09:34 PM rgw Backport #21456 (Resolved): luminous: rgw: wrong error message is returned when putting container...
10/14/2017
- 10:25 AM rbd Backport #20637 (Need More Info): jewel: rbd-mirror: cluster watcher should ignore -EPERM errors ...
- non-trivial
- 10:12 AM rgw Backport #21271 (Need More Info): jewel: rgw: shadow objects are sometimes not removed
- non-trivial
- 09:49 AM rgw Backport #21347 (Need More Info): jewel: Orphan data gets leaked on Bucket deletion
- non-trivial
- 09:38 AM rgw Backport #21447 (In Progress): jewel: rgw:multisite: Get bucket location which is located in anot...
- 09:27 AM rgw Backport #21454 (Need More Info): jewel: rgw: end_marker parameter doesn't work on Swift containe...
- non-trivial
- 09:24 AM rgw Backport #21546 (In Progress): jewel: rgw file write error
- 09:22 AM rgw Backport #21632 (In Progress): jewel: remove region from "INSTALL CEPH OBJECT GATEWAY"
- 09:20 AM rgw Documentation #21610: remove region from "INSTALL CEPH OBJECT GATEWAY"
- Nathan Cutler wrote:
> @Orit: Is the kraken backport needed for upgrade tests to pass?
Nevermind - I see now it's...
- 09:20 AM rgw Backport #21630 (Rejected): kraken: remove region from "INSTALL CEPH OBJECT GATEWAY"
- 09:19 AM rgw Backport #21791 (Need More Info): jewel: RGW: Multipart upload may double the quota
- non-trivial backport; needs RGW developer
- 09:18 AM rgw Bug #19602: Multipart upload may exceed the quota
- https://github.com/ceph/ceph/pull/12010
- 08:17 AM CephFS Backport #21805 (In Progress): luminous: client_metadata can be missing
- 08:16 AM CephFS Backport #21804 (In Progress): luminous: limit internal memory usage of object cacher.
- 06:55 AM rgw Backport #21453 (Resolved): luminous: rgw: end_marker parameter doesn't work on Swift container's...
10/13/2017
- 07:06 PM Ceph Backport #21780: xenial 16.04/ after jewel -> luminous upgrade Failed to stop ceph.target: Trans...
- Vasu Kulkarni wrote:
> we restart the services so reboot is not required
I meant that as a question, i.e. can you...
- 09:10 AM Ceph Backport #21780 (In Progress): xenial 16.04/ after jewel -> luminous upgrade Failed to stop ceph...
- jewel backport PR: https://github.com/ceph/ceph/pull/18290
- 09:05 AM Ceph Backport #21780: xenial 16.04/ after jewel -> luminous upgrade Failed to stop ceph.target: Trans...
- Note: this was fixed in master by https://github.com/ceph/ceph/pull/15835 (included in luminous v12.2.0 release)
F...
- 04:54 PM Ceph Feature #21762: Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and cep...
- Then it (ceph-kvstore-tool) will be included with ceph-mds and ceph-mgr as well, but I guess that's not a problem. Up...
- 08:40 AM Ceph Feature #21762 (Fix Under Review): Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool ...
- https://github.com/ceph/ceph/pull/18289
https://github.com/ceph/ceph/pull/18474
- 06:28 AM Ceph Feature #21762: Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and cep...
- > ceph-kvstore-tool in ceph-mon and ceph-osd
Oh, wait. This is unfortunately not possible. A single file (ceph-kvs...
- 06:25 AM Ceph Feature #21762: Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and cep...
- > Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and ceph-osd, and ceph-osdomap-tool in ce...
- 06:25 AM Ceph Feature #21762: Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and cep...
- Vikhyat Umrao wrote:
> If these daemon packages are becoming heavier we can create a new package ceph-tools where we...
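For illustration, a minimal sketch of what such a shared "ceph-tools" subpackage could look like in an RPM spec — the subpackage name and file list here are hypothetical, not the merged change. The underlying constraint noted above is that a given file can be owned by only one RPM package, so ceph-kvstore-tool cannot simply be shipped in both ceph-mon and ceph-osd.
<pre>
# Hypothetical spec fragment: ship the shared binary once, in its own
# subpackage, and have ceph-mon/ceph-osd depend on it.
%package tools
Summary: Low-level debugging tools for Ceph (ceph-kvstore-tool and friends)
%description tools
Offline store inspection tools needed on both mon and osd nodes.

%files tools
%{_bindir}/ceph-kvstore-tool
</pre>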
- 12:18 PM Ceph Backport #21796 (In Progress): jewel: Ubuntu amd64 client can not discover the ubuntu arm64 ceph ...
- 12:14 PM Ceph Backport #21796 (Resolved): jewel: Ubuntu amd64 client can not discover the ubuntu arm64 ceph clu...
- https://github.com/ceph/ceph/pull/18294
- 12:17 PM Ceph Backport #21795 (In Progress): luminous: Ubuntu amd64 client can not discover the ubuntu arm64 ce...
- 12:14 PM Ceph Backport #21795 (Resolved): luminous: Ubuntu amd64 client can not discover the ubuntu arm64 ceph ...
- https://github.com/ceph/ceph/pull/18293
- 12:13 PM RADOS Backport #21794 (Resolved): luminous: backoff causes out of order op
- 12:13 PM rbd Backport #21793 (Resolved): luminous: [rbd-mirror] primary image should register in remote, non-p...
- https://github.com/ceph/ceph/pull/20207
- 12:13 PM rgw Backport #21792 (Resolved): luminous: encryption: reject requests that don't provide all expected...
- https://github.com/ceph/ceph/pull/18429
- 12:13 PM rgw Backport #21791 (Resolved): jewel: RGW: Multipart upload may double the quota
- https://github.com/ceph/ceph/pull/18121
- 12:13 PM rgw Backport #21790 (Resolved): luminous: RGW: Multipart upload may double the quota
- https://github.com/ceph/ceph/pull/18435
- 12:13 PM rgw Backport #21789 (Resolved): luminous: user creation can overwrite existing user even if different...
- https://github.com/ceph/ceph/pull/18436
- 12:13 PM rbd Backport #21788 (Resolved): luminous: [journal] image-meta set event should refresh the image aft...
- https://github.com/ceph/ceph/pull/19485
- 12:13 PM Ceph Backport #21787 (Resolved): luminous: "Error EINVAL: invalid command" in upgrade:jewel-x-luminous...
- https://github.com/ceph/ceph/pull/18516
- 12:13 PM RADOS Backport #21786 (Resolved): jewel: OSDMap cache assert on shutdown
- https://github.com/ceph/ceph/pull/21184
- 12:13 PM RADOS Backport #21785 (Resolved): luminous: OSDMap cache assert on shutdown
- 12:13 PM RADOS Backport #21784 (Resolved): jewel: cli/crushtools/build.t sometimes fails in jenkins' "make check...
- https://github.com/ceph/ceph/pull/21158
- 12:12 PM RADOS Backport #21783 (Resolved): luminous: cli/crushtools/build.t sometimes fails in jenkins' "make ch...
- https://github.com/ceph/ceph/pull/18398
- 12:12 PM rbd Backport #21782 (Resolved): luminous: [journal] possible infinite loop within journal:tag_list cl...
- https://github.com/ceph/ceph/pull/18417
10/12/2017
- 01:09 AM Ceph Feature #21762: Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and cep...
- I'm not sure this is a good idea? My impression was that the binaries shipped in the ceph-test RPM are primarily used...
- 12:51 AM mgr Backport #21699 (In Progress): luminous: mgr status module uses base 10 units
- 12:46 AM RADOS Bug #21716: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- Kefu, thanks for fixing this. Can you also indicate which of the mentioned PRs need to be backported to fix the test ...
- 12:44 AM Ceph Backport #21755: RADOS: Get "valid pg state" error at ceph pg ls commands
- See original for the bug description.
- 12:15 AM Ceph Bug #21728 (Resolved): ceph-disk: retry on OSError
10/10/2017
- 06:24 PM Stable releases Tasks #19014 (Rejected): hammer v0.94.11
- hammer is EOL
- 07:03 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. Upgrade client-upgrade...
- 07:03 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. ceph-disk...
- 07:03 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. rgw...
- 07:03 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. fs...
- 07:02 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. Upgrade hammer-x ...
- 07:02 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. Upgrade jewel point-to-point-x...
- 07:02 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. powercycle...
- 07:01 AM Stable releases Tasks #21742: jewel v10.2.11
- h3. rados...
- 06:57 AM Stable releases Tasks #21742 (In Progress): jewel v10.2.11
- https://shaman.ceph.com/builds/ceph/wip-jewel-backports/7c4c04a59c15d8e53f2baed03dd8f67743d1d847/...
- 06:56 AM Stable releases Tasks #21742 (Resolved): jewel v10.2.11
- h3. Workflow
* "Preparing the release":http://ceph.com/docs/master/dev/development-workflow/#preparing-a-new-relea... - 06:45 AM Ceph Bug #21505: filestore rocksdb compaction readahead
- Though master PR was not yet merged, jewel backport PR was opened at https://github.com/ceph/ceph/pull/17898
- 06:44 AM Ceph Bug #21505 (Fix Under Review): filestore rocksdb compaction readahead
10/09/2017
- 08:53 PM rgw Bug #21723 (Fix Under Review): rgw: radosgw-admin reshard command argument error.
- 08:44 PM RADOS Feature #18206 (Resolved): osd: osd_scrub_during_recovery only considers primary, not replicas
- 08:43 PM RADOS Backport #21117 (Resolved): jewel: osd: osd_scrub_during_recovery only considers primary, not rep...
- 08:43 PM Ceph Revision 066716cd (ceph): Merge pull request #17815 from dzafman/wip-21117
- jewel: osd: osd_scrub_during_recovery only considers primary, not replicas
Reviewed-by: Josh Durgin <jdurgin@redhat....
- 12:27 PM Ceph Backport #21729 (In Progress): luminous: ceph-disk: retry on OSError
- 12:07 PM Ceph Backport #21729 (Resolved): luminous: ceph-disk: retry on OSError
- https://github.com/ceph/ceph/pull/18189
- 12:26 PM Ceph Backport #21730 (Resolved): jewel: ceph-disk: retry on OSError
- 12:08 PM Ceph Backport #21730 (In Progress): jewel: ceph-disk: retry on OSError
- 12:07 PM Ceph Backport #21730 (Resolved): jewel: ceph-disk: retry on OSError
- https://github.com/ceph/ceph/pull/18169
- 12:07 PM Ceph Backport #21732 (Resolved): luminous: "cluster [WRN] Health check failed: noup flag(s) set (OSDMA...
- https://github.com/ceph/ceph/pull/18410
- 12:07 PM Ceph Backport #21731 (Rejected): luminous: ceph_test_objectstore fails ObjectStore/StoreTest.Synthetic...
- 12:06 PM Ceph Bug #21728 (Resolved): ceph-disk: retry on OSError
- master PR: https://github.com/ceph/ceph/pull/18162
10/06/2017
- 08:18 PM RADOS Bug #20416: "FAILED assert(osdmap->test_flag((1<<15)))" (sortbitwise) on upgraded cluster
- fast-tracking the backport, since it's already open
- 06:33 PM Stable releases Tasks #20613 (Resolved): jewel v10.2.10
- 05:15 PM RADOS Bug #21660: Kraken client crash after upgrading cluster from Kraken to Luminous
- @Yuri, @Sage - I guess the upgrade/kraken-x suite did not catch this because it does not do "/usr/bin/rbd ls"?
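For reference, a hypothetical teuthology fragment illustrating the kind of client-side check being suggested here — the role name and placement are illustrative, not an actual suite change:
<pre>
# Hypothetical teuthology task: run "rbd ls" from the old client after
# the upgrade, so a client crash like this one surfaces in the suite.
tasks:
- exec:
    client.0:
      - /usr/bin/rbd ls
</pre>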
- 03:22 AM RADOS Backport #21692 (In Progress): luminous: Kraken client crash after upgrading cluster from Kraken ...
- 03:18 AM RADOS Backport #21692 (Resolved): luminous: Kraken client crash after upgrading cluster from Kraken to ...
- https://github.com/ceph/ceph/pull/18140
- 03:21 AM RADOS Backport #21702 (Resolved): luminous: BlueStore::umount will crash when the BlueStore is opened b...
- https://github.com/ceph/ceph/pull/18750
- 03:21 AM RADOS Backport #21701 (Resolved): luminous: ceph-kvstore-tool does not call bluestore's umount when exit
- https://github.com/ceph/ceph/pull/18751
- 03:21 AM RADOS Bug #21625: ceph-kvstore-tool does not call bluestore's umount when exit
- https://github.com/ceph/ceph/pull/18083
- 03:20 AM RADOS Bug #21624: BlueStore::umount will crash when the BlueStore is opened by start_kv_only()
- https://github.com/ceph/ceph/pull/18082
- 03:19 AM rbd Backport #21700 (Resolved): luminous: rbd-mirror: Allow a different data-pool to be used on the s...
- https://github.com/ceph/ceph/pull/19305
- 03:19 AM mgr Backport #21699 (Resolved): luminous: mgr status module uses base 10 units
- https://github.com/ceph/ceph/pull/18257
- 03:19 AM rgw Backport #21698 (Resolved): luminous: radosgw-admin usage show loops indefinitly
- https://github.com/ceph/ceph/pull/18437
- 03:18 AM RADOS Backport #21697 (Resolved): luminous: OSDService::recovery_need_sleep read+updated without locking
- https://github.com/ceph/ceph/pull/18753
- 03:18 AM rgw Backport #21696 (Resolved): luminous: fix a bug about inconsistent unit of comparison
- https://github.com/ceph/ceph/pull/18438
- 03:18 AM rgw Backport #21695 (Resolved): luminous: failed CompleteMultipartUpload request does not release lock
- https://github.com/ceph/ceph/pull/18430
- 03:18 AM rbd Backport #21694 (Resolved): luminous: compare-and-write -EILSEQ failures should be filtered when ...
- https://github.com/ceph/ceph/pull/20206
- 03:18 AM RADOS Backport #21693 (Resolved): luminous: interval_map.h: 161: FAILED assert(len > 0)
- https://github.com/ceph/ceph/pull/18413
- 03:18 AM rbd Backport #21691 (Resolved): jewel: [qa] rbd_mirror_helpers.sh request_resync_image function saves...
- https://github.com/ceph/ceph/pull/19804
- 03:18 AM rbd Backport #21690 (Resolved): luminous: [qa] rbd_mirror_helpers.sh request_resync_image function sa...
- https://github.com/ceph/ceph/pull/19802
- 03:18 AM rbd Backport #21689 (Resolved): jewel: Possible deadlock in 'list_children' when refresh is required
- https://github.com/ceph/ceph/pull/21224
- 03:18 AM rbd Backport #21688 (Resolved): luminous: Possible deadlock in 'list_children' when refresh is required
- https://github.com/ceph/ceph/pull/18564
10/04/2017
- 08:40 AM Ceph Bug #21665: ceph -v gives luminous (stable) - should it be luminous (LTS)?
- LTS means Long Term Stable. The odd-numbered releases are just "Stable". In other words, both are "stable" - the only...
- 08:38 AM Ceph Bug #21665 (Need More Info): ceph -v gives luminous (stable) - should it be luminous (LTS)?
- 08:32 AM Ceph Bug #21665: ceph -v gives luminous (stable) - should it be luminous (LTS)?
- Does "ceph -v" return "LTS" in any of the other LTS releases, i.e. jewel, hammer?
10/03/2017
- 03:23 AM rgw Documentation #21610: remove region from "INSTALL CEPH OBJECT GATEWAY"
- @Orit: Is the kraken backport needed for upgrade tests to pass?
- 02:59 AM mgr Backport #21659 (Resolved): luminous: Crash in get_metadata_python during MDS restart
- https://github.com/ceph/ceph/pull/18412
- 02:59 AM CephFS Backport #21658 (Resolved): luminous: purge queue and standby replay mds
- https://github.com/ceph/ceph/pull/18385
- 02:58 AM CephFS Backport #21657 (Resolved): luminous: StrayManager::truncate is broken
- https://github.com/ceph/ceph/pull/18019
- 02:58 AM mgr Backport #21656 (Resolved): luminous: crash on DaemonPerfCounters::update
- https://github.com/ceph/ceph/pull/18675
- 02:58 AM rgw Backport #21655 (Resolved): luminous: expose --sync-stats via admin api
- https://github.com/ceph/ceph/pull/18439
- 02:58 AM RADOS Backport #21653 (Resolved): luminous: Erasure code recovery should send additional reads if neces...
- https://github.com/ceph/ceph/pull/20081
With http://tracker.ceph.com/issues/22069
- 02:58 AM rgw Backport #21652 (Resolved): luminous: policy checks missing from Get/SetRequestPayment operations
- https://github.com/ceph/ceph/pull/18440
- 02:58 AM rgw Backport #21651 (Resolved): luminous: rgw: avoid logging keystone revocation failures when no key...
- https://github.com/ceph/ceph/pull/18441
- 02:58 AM RADOS Backport #21650 (Resolved): luminous: buffer_anon leak during deep scrub (on otherwise idle osd)
- https://github.com/ceph/ceph/pull/18227
- 02:58 AM rgw Backport #21649 (Resolved): luminous: multisite: sync of bucket entrypoints fail with ENOENT
- 02:58 AM mgr Backport #21648 (Resolved): luminous: mgr[zabbix] float division by zero
- https://github.com/ceph/ceph/pull/18734
- 02:58 AM Ceph Backport #21647 (Resolved): luminous: "mgr_command_descs" not sync in the new join Mon
- https://github.com/ceph/ceph/pull/18620
- 02:58 AM rbd Backport #21646 (Resolved): luminous: Image-meta should be dynamically refreshed
- https://github.com/ceph/ceph/pull/19447
- 02:57 AM Ceph Backport #21645 (Resolved): luminous: Incomplete/missing get_store_prefixes implementations in OS...
- https://github.com/ceph/ceph/pull/18621
- 02:57 AM rbd Backport #21644 (Resolved): luminous: [rbd-mirror] image-meta is not replicated as part of initia...
- https://github.com/ceph/ceph/pull/19484
- 02:57 AM Ceph Backport #21643 (Resolved): luminous: upmap does not respect osd reweights
- 02:57 AM rbd Backport #21642 (Resolved): jewel: rbd ls -l crashes with SIGABRT
- https://github.com/ceph/ceph/pull/19801
- 02:57 AM rbd Backport #21641 (Resolved): luminous: rbd ls -l crashes with SIGABRT
- https://github.com/ceph/ceph/pull/19800
- 02:57 AM rbd Backport #21640 (Resolved): luminous: [rbd-mirror] resync isn't properly deleting non-primary image
- https://github.com/ceph/ceph/pull/18337
- 02:57 AM rbd Backport #21639 (Resolved): luminous: rbd does not delete snaps in (ec) data pool
- https://github.com/ceph/ceph/pull/18336
- 02:57 AM mgr Backport #21638 (Resolved): luminous: dashboard OSD list has servers and osds in arbitrary order
- 02:57 AM rgw Backport #21637 (Resolved): luminous: encryption: PutObj response does not include sse-kms headers
- https://github.com/ceph/ceph/pull/18442
- 02:57 AM RADOS Backport #21636 (Resolved): luminous: ceph-monstore-tool --readable mode doesn't understand FSMap...
- https://github.com/ceph/ceph/pull/18754
- 02:57 AM rgw Backport #21635 (Resolved): luminous: s3:GetBucketCORS/s3:PutBucketCORS policy fails with 403
- https://github.com/ceph/ceph/pull/18444
- 02:57 AM rgw Backport #21634 (Resolved): luminous: s3:GetBucketLocation bucket policy fails with 403
- https://github.com/ceph/ceph/pull/18443
- 02:57 AM rgw Backport #21633 (Resolved): luminous: s3:GetBucketWebsite/PutBucketWebsite fails with 403
- https://github.com/ceph/ceph/pull/18445
- 02:57 AM rgw Backport #21632 (Resolved): jewel: remove region from "INSTALL CEPH OBJECT GATEWAY"
- https://github.com/ceph/ceph/pull/18303
- 02:57 AM rgw Backport #21631 (Resolved): luminous: remove region from "INSTALL CEPH OBJECT GATEWAY"
- https://github.com/ceph/ceph/pull/18865
- 02:57 AM rgw Backport #21630 (Rejected): kraken: remove region from "INSTALL CEPH OBJECT GATEWAY"
10/02/2017
- 01:52 PM Ceph Bug #21300 (Resolved): "ceph osd df" crashes ceph-mon if mgr is offline
- 01:51 PM rbd Bug #20426 (Resolved): some generic options can not be passed by rbd-nbd
- 01:51 PM rbd Bug #21247 (Resolved): [cls] metadata_list API function does not honor `max_return` parameter.
- 01:51 PM rbd Bug #21251 (Resolved): [test] various teuthology errors
- 01:50 PM RADOS Bug #20910 (Resolved): spurious MON_DOWN, apparently slow/laggy mon
- 01:50 PM RADOS Bug #21243 (Resolved): incorrect erasure-code space in command ceph df
- 01:50 PM rgw Bug #20906 (Resolved): multisite: FAILED assert(prev_iter != pos_to_prev.end()) in RGWMetaSyncSha...
- 01:50 PM rgw Bug #21349 (Resolved): rgw: data encryption sometimes fails to follow AWS settings
09/29/2017
- 07:12 PM Ceph Bug #21300 (Pending Backport): "ceph osd df" crashes ceph-mon if mgr is offline
- 03:00 PM RADOS Bug #21249 (Resolved): Client client.admin marked osd.2 out, after it was down for 1504627577 sec...
- 02:58 PM RADOS Bug #20944 (Resolved): OSD metadata 'backend_filestore_dev_node' is "unknown" even for simple dep...
09/28/2017
- 09:37 AM Ceph Bug #20706 (Resolved): ceph-disk can't activate-block Error: /dev/sdb2 is not a block device
09/25/2017
- 09:22 PM Ceph Backport #21548 (In Progress): luminous: ceph_manager: bad AssertionError: failed to recover befo...
- 09:21 PM Ceph Backport #21548 (Resolved): luminous: ceph_manager: bad AssertionError: failed to recover before ...
- https://github.com/ceph/ceph/pull/17951
- 09:22 PM mgr Backport #21549 (Resolved): luminous: the dashboard uses absolute links for filesystems and clients
- 09:21 PM mgr Backport #21547 (Resolved): luminous: ceph-mgr gets process called "exe" after respawn
- https://github.com/ceph/ceph/pull/18738
- 09:21 PM rgw Backport #21546 (Resolved): jewel: rgw file write error
- https://github.com/ceph/ceph/pull/18304
- 09:21 PM rgw Backport #21545 (Resolved): luminous: rgw file write error
- https://github.com/ceph/ceph/pull/18433
- 09:21 PM RADOS Backport #21544 (Resolved): luminous: mon osd feature checks for osdmap flags and require-osd-rel...
- https://github.com/ceph/ceph/pull/18364
- 09:21 PM RADOS Backport #21543 (Resolved): luminous: bluestore fsck took 224.778802 seconds to complete which ca...
- https://github.com/ceph/ceph/pull/18362
- 09:18 PM rgw Bug #21455: rgw file write error
- Matt, kraken is EoL - removing on the assumption it was added by mistake. Feel free to reinstate it if there is a par...
- 08:40 PM CephFS Backport #21540 (Resolved): luminous: whitelist additions
- 08:40 PM Ceph Revision 63ce5146 (ceph): Merge pull request #17945 from batrick/i21540
- luminous: qa whitelist fixes
Reviewed-by: Nathan Cutler <ncutler@suse.com>
- 08:33 PM CephFS Bug #21463 (Resolved): qa: ignorable "MDS cache too large" warning
- 08:32 PM CephFS Backport #21472 (Resolved): luminous: qa: ignorable "MDS cache too large" warning
- 08:32 PM Ceph Revision 9f8d66eb (ceph): Merge pull request #17821 from smithfarm/wip-21472-luminous
- luminous: tests: kcephfs: ignorable MDS cache too large warning
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
...
- 01:52 PM CephFS Bug #21510 (Resolved): qa: kcephfs: client-limits: whitelist "MDS cache too large"
- 01:52 PM CephFS Backport #21517 (Resolved): luminous: qa: kcephfs: client-limits: whitelist "MDS cache too large"
- 01:51 PM CephFS Bug #21509 (Resolved): qa: kcephfs: ignore warning on expected mds failover
- 01:51 PM CephFS Backport #21516 (Resolved): luminous: qa: kcephfs: ignore warning on expected mds failover
- 01:51 PM CephFS Bug #21508 (Resolved): qa: kcephfs: missing whitelist for evicted client
- 01:48 PM Ceph Backport #21522 (In Progress): jewel: ceph-disk omits "--runtime" when enabling ceph-osd@$ID.serv...
- 01:46 PM Ceph Bug #18305: ceph-osd systemd unit files incomplete
- > seems like it's no longer reproducible using current Jewel or Luminous (on Debian Stretch). still very noisy and a ...
- 01:43 PM CephFS Backport #21515 (Resolved): luminous: qa: kcephfs: missing whitelist for evicted client
- 01:43 PM Ceph Revision 35a23b86 (ceph): Merge pull request #17922 from batrick/kcephfs-backports
- luminous: qa: kcephfs whitelist fixes
Reviewed-by: Yan, Zheng <zyan@redhat.com>
09/22/2017
- 09:15 PM CephFS Backport #21519: jewel: qa: test_client_pin times out waiting for dentry release from kernel
- I'll be honest and say I don't like late merges that touch real code (not just tests) because they can introduce regr...
- 08:34 PM CephFS Backport #21526 (Closed): jewel: client: dual client segfault with racing ceph_shutdown
- https://github.com/ceph/ceph/pull/21153
- 08:34 PM CephFS Backport #21525 (Resolved): luminous: client: dual client segfault with racing ceph_shutdown
- https://github.com/ceph/ceph/pull/20082
- 08:34 PM mgr Backport #21524 (Resolved): luminous: DaemonState members accessed outside of locks
- https://github.com/ceph/ceph/pull/18675
- 08:33 PM Ceph Backport #21523 (Resolved): luminous: selinux denies getattr on lnk sysfs files
- https://github.com/ceph/ceph/pull/18650
- 08:33 PM Ceph Backport #21522 (Resolved): jewel: ceph-disk omits "--runtime" when enabling ceph-osd@$ID.service...
- https://github.com/ceph/ceph/pull/17942
- 08:31 PM CephFS Bug #20337 (Resolved): test_rebuild_simple_altpool triggers MDS assertion
- 08:31 PM CephFS Backport #21490 (Resolved): luminous: test_rebuild_simple_altpool triggers MDS assertion
- 08:31 PM CephFS Bug #21466 (Resolved): qa: fs.get_config on stopped MDS
- 08:30 PM CephFS Backport #21484 (Resolved): luminous: qa: fs.get_config on stopped MDS
- 08:23 PM Ceph Revision 750e67ca (ceph): Merge pull request #17892 from smithfarm/wip-p2p-s3-test
- jewel: tests: fix upgrade/jewel-x/point-to-point-x
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Yuri ...
- 07:34 PM CephFS Bug #21252 (Resolved): mds: asok command error merged with partial Formatter output
- 07:34 PM CephFS Backport #21321 (Resolved): luminous: mds: asok command error merged with partial Formatter output
- 07:33 PM CephFS Bug #21414 (Resolved): client: Variable "onsafe" going out of scope leaks the storage it points to
- 07:33 PM CephFS Backport #21436 (Resolved): luminous: client: Variable "onsafe" going out of scope leaks the stor...
- 07:31 PM CephFS Bug #21381 (Resolved): test_filtered_df: assert 0.9 < ratio < 1.1
- 07:31 PM CephFS Backport #21437 (Resolved): luminous: test_filtered_df: assert 0.9 < ratio < 1.1
- 07:30 PM CephFS Bug #21275 (Resolved): test hang after mds evicts kclient
- 07:30 PM CephFS Backport #21473 (Resolved): luminous: test hang after mds evicts kclient
- 07:29 PM CephFS Bug #21462 (Resolved): qa: ignorable MDS_READ_ONLY warning
- 07:29 PM CephFS Backport #21464 (Resolved): luminous: qa: ignorable MDS_READ_ONLY warning
- 07:28 PM CephFS Bug #21071 (Resolved): qa: test_misc creates metadata pool with dummy object resulting in WRN: PO...
- 07:28 PM CephFS Backport #21449 (Resolved): luminous: qa: test_misc creates metadata pool with dummy object resul...
- 07:27 PM CephFS Backport #21486 (Resolved): luminous: qa: test_client_pin times out waiting for dentry release fr...
- 07:27 PM CephFS Bug #21421 (Resolved): MDS rank add/remove log messages say wrong number of ranks
- 07:26 PM CephFS Backport #21487 (Resolved): luminous: MDS rank add/remove log messages say wrong number of ranks
- 07:26 PM CephFS Backport #21488 (Resolved): luminous: qa: failures from pjd fstest
- 03:07 PM Ceph Revision 662168b5 (ceph): Merge pull request #17910 from smithfarm/wip-21499
- tests: point-to-point-x: upgrade client.1 to -x along with cluster nodes
Reviewed-by: Casey Bodley <cbodley@redhat.com>
09/21/2017
- 04:33 PM Ceph Bug #21494: selinux: Allow getattr on lnk sysfs files
- Dupe?
- 04:17 PM devops Bug #16097 (New): /usr/bin/rados depends on libradosstriper.so.1
- Reopening on the grounds that it may be feasible to implement a compile-time option that would enable/disable the cod...
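A minimal sketch of the kind of compile-time option being proposed — the option name and target wiring are hypothetical, not an existing Ceph build flag:
<pre>
# Hypothetical CMake sketch: make the striper-dependent code in the
# rados CLI optional at build time.
option(WITH_RADOSSTRIPER "Link the rados CLI against libradosstriper" ON)
if(WITH_RADOSSTRIPER)
  target_compile_definitions(rados PRIVATE WITH_RADOSSTRIPER)
  target_link_libraries(rados radosstriper)
endif()
</pre>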
- 01:54 PM Ceph Backport #21439 (In Progress): luminous: Performance: Slow OSD startup, heavy LevelDB activity
- 01:45 PM Ceph Backport #21372 (In Progress): luminous: Add export and remove ceph-objectstore-tool command option
- 01:44 PM CephFS Backport #21488 (In Progress): luminous: qa: failures from pjd fstest
- 03:34 AM CephFS Backport #21488 (Resolved): luminous: qa: failures from pjd fstest
- https://github.com/ceph/ceph/pull/17888
- 01:42 PM CephFS Backport #21487 (In Progress): luminous: MDS rank add/remove log messages say wrong number of ranks
- 03:34 AM CephFS Backport #21487 (Resolved): luminous: MDS rank add/remove log messages say wrong number of ranks
- https://github.com/ceph/ceph/pull/17887
- 01:40 PM CephFS Backport #21486 (In Progress): luminous: qa: test_client_pin times out waiting for dentry release...
- 03:34 AM CephFS Backport #21486 (Resolved): luminous: qa: test_client_pin times out waiting for dentry release fr...
- https://github.com/ceph/ceph/pull/17886
- 11:57 AM Ceph Backport #21485 (In Progress): jewel: the typo to the thread name
- 03:34 AM Ceph Backport #21485 (Resolved): jewel: the typo to the thread name
- https://github.com/ceph/ceph/pull/17883
- 08:38 AM CephFS Backport #21449 (In Progress): luminous: qa: test_misc creates metadata pool with dummy object re...
- 08:37 AM CephFS Backport #21437 (In Progress): luminous: test_filtered_df: assert 0.9 < ratio < 1.1
- 08:35 AM CephFS Backport #21436 (In Progress): luminous: client: Variable "onsafe" going out of scope leaks the s...
- 08:29 AM CephFS Backport #21359 (In Progress): luminous: racy is_mounted() checks in libcephfs
- 04:20 AM CephFS Feature #20752 (Fix Under Review): cap message flag which indicates if client still has pending c...
- *master PR*: https://github.com/ceph/ceph/pull/16778
- 04:18 AM CephFS Backport #21321 (In Progress): luminous: mds: asok command error merged with partial Formatter ou...
- 04:15 AM mgr Backport #21479 (In Progress): luminous: Services reported with blank hostname by mgr
- 04:13 AM mgr Backport #21452 (In Progress): luminous: prometheus module generates invalid output when counter ...
- 04:11 AM mgr Backport #21443 (In Progress): luminous: Prometheus crash when update
- 04:09 AM mgr Backport #21320 (In Progress): luminous: Quieten scary RuntimeError from restful module on startup
- 04:07 AM RADOS Backport #21465 (In Progress): luminous: OSD metadata 'backend_filestore_dev_node' is "unknown" e...
- 04:05 AM RADOS Backport #21438 (In Progress): luminous: Daemons(OSD, Mon...) exit abnormally at injectargs command
- 04:03 AM RADOS Backport #21343 (In Progress): luminous: DNS SRV default service name not used anymore
- 04:01 AM RADOS Backport #21307 (In Progress): luminous: Client client.admin marked osd.2 out, after it was down ...
- 03:59 AM rbd Backport #21441 (In Progress): luminous: [cli] mirror "getter" commands will fail if mirroring ha...
- 03:57 AM rbd Backport #21299 (In Progress): luminous: [rbd-mirror] asok hook names not updated when image is r...
- 03:54 AM rgw Backport #21451 (In Progress): luminous: rgw: lc process only schdule the first item of lc objects
- 03:53 AM rgw Backport #21448 (In Progress): luminous: rgw: string_view instance points to expired memory in Pr...
- 03:52 AM rgw Backport #21446 (In Progress): luminous: rgw:multisite: Get bucket location which is located in a...
- 03:50 AM rgw Backport #21444 (In Progress): luminous: rgw: setxattrs call leads to different mtimes for bucket...
- 03:43 AM CephFS Backport #21484 (In Progress): luminous: qa: fs.get_config on stopped MDS
- 03:34 AM CephFS Backport #21484 (Resolved): luminous: qa: fs.get_config on stopped MDS
- https://github.com/ceph/ceph/pull/17855
- 03:41 AM CephFS Backport #21490 (In Progress): luminous: test_rebuild_simple_altpool triggers MDS assertion
- 03:35 AM CephFS Backport #21490 (Resolved): luminous: test_rebuild_simple_altpool triggers MDS assertion
- https://github.com/ceph/ceph/pull/17855
- 03:34 AM CephFS Backport #21489 (Resolved): jewel: qa: failures from pjd fstest
- https://github.com/ceph/ceph/pull/21152
- 03:32 AM CephFS Bug #20337: test_rebuild_simple_altpool triggers MDS assertion
- @Patrick: So both https://github.com/ceph/ceph/pull/16305 and https://github.com/ceph/ceph/pull/17849 need to be back...
09/20/2017
- 08:37 PM RADOS Bug #21428: luminous: osd: does not request latest map from mon
- Fix:
* master https://github.com/ceph/ceph/pull/17828
* luminous https://github.com/ceph/ceph/pull/17829
- 02:03 PM Ceph Bug #21432 (Pending Backport): the typo to the thread name
- 01:42 PM Ceph Backport #21478 (In Progress): jewel: systemd: Add explicit Before=ceph.target
- 01:39 PM Ceph Backport #21478 (Resolved): jewel: systemd: Add explicit Before=ceph.target
- https://github.com/ceph/ceph/pull/17841
- 01:40 PM mgr Backport #21479 (Resolved): luminous: Services reported with blank hostname by mgr
- https://github.com/ceph/ceph/pull/17869
- 01:38 PM Ceph Bug #21477: systemd: Add explicit Before=ceph.target
- *master PR*: https://github.com/ceph/ceph/pull/15835
We have seen the "cyclic dependency" error appear after apply... - 01:37 PM Ceph Bug #21477 (Resolved): systemd: Add explicit Before=ceph.target
- The PartOf= and WantedBy= directives in the various systemd unit files and targets create the following logical hier...
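For illustration, a sketch of the kind of ordering directive the fix adds, assuming a sub-target such as ceph-osd.target that is PartOf=ceph.target. PartOf= and WantedBy= only express membership and propagation, so an explicit Before= is what gives systemd a deterministic start/stop order; the exact unit contents below are illustrative.
<pre>
# Illustrative sub-target after the fix. Without Before=ceph.target,
# systemd may stop ceph.target and its member units in arbitrary order.
[Unit]
Description=ceph target allowing to start/stop all ceph-osd@.service instances at once
PartOf=ceph.target
Before=ceph.target

[Install]
WantedBy=multi-user.target ceph.target
</pre>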
- 03:07 AM CephFS Backport #21473 (In Progress): luminous: test hang after mds evicts kclient
- 02:43 AM CephFS Backport #21473 (Resolved): luminous: test hang after mds evicts kclient
- https://github.com/ceph/ceph/pull/17822
- 03:04 AM Ceph Backport #19224 (New): jewel: osd ops (sent and?) arrive at osd out of order
- @David - could you take this one?
- 03:03 AM Ceph Backport #19140 (New): jewel: osdc/Objecter: If osd full, it should pause read op which w/ rworde...
- @David - could you take this one?
- 02:58 AM rbd Bug #21181 (Resolved): "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel-master
- 02:58 AM rbd Backport #21346 (Resolved): jewel: "'rbd/import_export.sh'" broken in upgrade:luminous-x:parallel...
- 02:57 AM rbd Backport #21345 (Resolved): luminous: "'rbd/import_export.sh'" broken in upgrade:luminous-x:paral...
- 02:57 AM rbd Backport #21344 (Resolved): kraken: "'rbd/import_export.sh'" broken in upgrade:luminous-x:paralle...
- 02:47 AM CephFS Backport #21472 (In Progress): luminous: qa: ignorable "MDS cache too large" warning
- 02:43 AM CephFS Backport #21472 (Resolved): luminous: qa: ignorable "MDS cache too large" warning
- https://github.com/ceph/ceph/pull/17821
- 02:43 AM CephFS Bug #21463 (Pending Backport): qa: ignorable "MDS cache too large" warning
- Looks like it needs a backport of 71f0066f6ec