Ceph : Issues
https://tracker.ceph.com/
2011-04-05T10:48:52Z
Ceph
Redmine
rgw - Feature #984 (New): rgw: user logging API
https://tracker.ceph.com/issues/984
2011-04-05T10:48:52Z
Anonymous
<p>from s3-tests:</p>
<pre>
def test_logging_toggle():
bucket = get_new_bucket()
log_bucket = s3.main.create_bucket(bucket.name + '-log')
log_bucket.set_as_logging_target()
bucket.enable_logging(target_bucket=log_bucket, target_prefix=bucket.name)
bucket.disable_logging()
</pre>
<pre>
$ S3TEST_CONF=tv.conf ./virtualenv/bin/nosetests test_s3:test_logging_toggle
E
======================================================================
ERROR: test_s3.test_logging_toggle
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/tv/src/s3-tests.git/virtualenv/lib/python2.6/site-packages/nose/case.py", line 187, in runTest
self.test(*self.arg)
File "/home/tv/src/s3-tests.git/test_s3.py", line 557, in test_logging_toggle
log_bucket.set_as_logging_target()
File "/home/tv/src/s3-tests.git/virtualenv/lib/python2.6/site-packages/boto/s3/bucket.py", line 749, in set_as_logging_target
self.set_acl(policy, headers=headers)
File "/home/tv/src/s3-tests.git/virtualenv/lib/python2.6/site-packages/boto/s3/bucket.py", line 586, in set_acl
headers, version_id)
File "/home/tv/src/s3-tests.git/virtualenv/lib/python2.6/site-packages/boto/s3/bucket.py", line 581, in set_xml_acl
response.status, response.reason, body)
S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>
-------------------- >> begin captured logging << --------------------
boto: DEBUG: path=/
boto: DEBUG: auth_path=/
boto: DEBUG: Canonical: GET
Tue, 05 Apr 2011 17:48:29 GMT
/
boto: DEBUG: Method: GET
boto: DEBUG: Path: /
boto: DEBUG: Data:
boto: DEBUG: Headers: {'Date': 'Tue, 05 Apr 2011 17:48:29 GMT', 'Content-Length': '0', 'Authorization': 'AWS TKKZ1DX83O7ZCTWHE0YD:ozpyUFTu46cLPMAzqxNmtpqxnTA=', 'User-Agent': 'Boto/2.0b4 (linux2)'}
boto: DEBUG: Host: localhost:7280
boto: DEBUG: establishing HTTP connection
boto: DEBUG: path=/
boto: DEBUG: auth_path=/
boto: DEBUG: Canonical: GET
Tue, 05 Apr 2011 17:48:29 GMT
/
boto: DEBUG: Method: GET
boto: DEBUG: Path: /
boto: DEBUG: Data:
boto: DEBUG: Headers: {'Date': 'Tue, 05 Apr 2011 17:48:29 GMT', 'Content-Length': '0', 'Authorization': 'AWS O54XVCC9MQ9Q72TWP5Y1:Zm89VSN6M8qDhkW4gamK6PXYolU=', 'User-Agent': 'Boto/2.0b4 (linux2)'}
boto: DEBUG: Host: localhost:7280
boto: DEBUG: establishing HTTP connection
boto: DEBUG: path=/test-tv-xtepuoh62trj8qevlj2bm-1/
boto: DEBUG: auth_path=/test-tv-xtepuoh62trj8qevlj2bm-1/
boto: DEBUG: Canonical: PUT
Tue, 05 Apr 2011 17:48:29 GMT
/test-tv-xtepuoh62trj8qevlj2bm-1/
boto: DEBUG: Method: PUT
boto: DEBUG: Path: /test-tv-xtepuoh62trj8qevlj2bm-1/
boto: DEBUG: Data:
boto: DEBUG: Headers: {'Date': 'Tue, 05 Apr 2011 17:48:29 GMT', 'Content-Length': '0', 'Authorization': 'AWS O54XVCC9MQ9Q72TWP5Y1:P1Xnh+4Ffqh4sg5icVUt4EttGNQ=', 'User-Agent': 'Boto/2.0b4 (linux2)'}
boto: DEBUG: Host: localhost:7280
boto: DEBUG: path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: auth_path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: Canonical: PUT
Tue, 05 Apr 2011 17:48:30 GMT
/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: Method: PUT
boto: DEBUG: Path: /test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: Data:
boto: DEBUG: Headers: {'Date': 'Tue, 05 Apr 2011 17:48:30 GMT', 'Content-Length': '0', 'Authorization': 'AWS O54XVCC9MQ9Q72TWP5Y1:I0o9IElpN9ioU4Tpd5r40YUZUqE=', 'User-Agent': 'Boto/2.0b4 (linux2)'}
boto: DEBUG: Host: localhost:7280
boto: DEBUG: path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: auth_path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: auth_path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: Canonical: GET
Tue, 05 Apr 2011 17:48:32 GMT
/test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: Method: GET
boto: DEBUG: Path: /test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: Data:
boto: DEBUG: Headers: {'Date': 'Tue, 05 Apr 2011 17:48:32 GMT', 'Content-Length': '0', 'Authorization': 'AWS O54XVCC9MQ9Q72TWP5Y1:xcsbtvdMiWpb7GXQnv8R25+SMk0=', 'User-Agent': 'Boto/2.0b4 (linux2)'}
boto: DEBUG: Host: localhost:7280
boto: DEBUG: path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: auth_path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/
boto: DEBUG: path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: auth_path=/test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: Canonical: PUT
Tue, 05 Apr 2011 17:48:33 GMT
/test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: Method: PUT
boto: DEBUG: Path: /test-tv-xtepuoh62trj8qevlj2bm-1-log/?acl
boto: DEBUG: Data: <AccessControlPolicy><Owner><ID>O54XVCC9MQ9Q72TWP5Y1</ID><DisplayName>Mr. Foo</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>O54XVCC9MQ9Q72TWP5Y1</ID><DisplayName>Mr. Foo</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI></Grantee><Permission>WRITE</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI></Grantee><Permission>READ_ACP</Permission></Grant></AccessControlList></AccessControlPolicy>
boto: DEBUG: Headers: {'Date': 'Tue, 05 Apr 2011 17:48:33 GMT', 'Content-Length': '760', 'Authorization': 'AWS O54XVCC9MQ9Q72TWP5Y1:JvJi9jNX5qlDGDrYuXlR6t+zJCM=', 'User-Agent': 'Boto/2.0b4 (linux2)'}
boto: DEBUG: Host: localhost:7280
--------------------- >> end captured logging << ---------------------
----------------------------------------------------------------------
Ran 1 test in 8.280s
FAILED (errors=1)
</pre>
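<p>The 403 comes from the ACL PUT at the end of the captured log: boto's set_as_logging_target() rewrites the log bucket's ACL to grant the S3 LogDelivery group WRITE and READ_ACP, and rgw rejects that document. A minimal sketch of the ACL body boto sends (reconstructed from the "Data:" line above; the owner ID and display name are placeholders, not real credentials):</p>

```python
# Build the AccessControlPolicy XML that set_as_logging_target() PUTs:
# owner keeps FULL_CONTROL, LogDelivery group gets WRITE and READ_ACP.
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"
LOG_DELIVERY = "http://acs.amazonaws.com/groups/s3/LogDelivery"

def logging_target_acl(owner_id, owner_name):
    policy = ET.Element("AccessControlPolicy")
    owner = ET.SubElement(policy, "Owner")
    ET.SubElement(owner, "ID").text = owner_id
    ET.SubElement(owner, "DisplayName").text = owner_name
    acl = ET.SubElement(policy, "AccessControlList")

    def grant(grantee_type, permission, **fields):
        g = ET.SubElement(acl, "Grant")
        grantee = ET.SubElement(g, "Grantee",
                                {"xmlns:xsi": XSI, "xsi:type": grantee_type})
        for tag, text in fields.items():
            ET.SubElement(grantee, tag).text = text
        ET.SubElement(g, "Permission").text = permission

    grant("CanonicalUser", "FULL_CONTROL", ID=owner_id, DisplayName=owner_name)
    grant("Group", "WRITE", URI=LOG_DELIVERY)
    grant("Group", "READ_ACP", URI=LOG_DELIVERY)
    return ET.tostring(policy, encoding="unicode")
```

<p>For the test to pass, rgw would need to accept (or at least tolerate) Group grantees with the LogDelivery URI, in addition to implementing the ?logging subresource itself.</p>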
RADOS - Feature #628 (New): crushtool: better error messages when parsing a crushmap.txt
https://tracker.ceph.com/issues/628
2010-12-02T15:27:36Z
Colin McCabe
colinm@hq.newdream.net
<p>There is a more advanced error-handling API for Boost Spirit, described at:<br /><a class="external" href="http://www.boost.org/doc/libs/1_41_0/libs/spirit/classic/doc/error_handling.html">http://www.boost.org/doc/libs/1_41_0/libs/spirit/classic/doc/error_handling.html</a></p>
<p>Using this, we could give the user more information about parse errors than just a line number.</p>
CephFS - Feature #601 (New): mds: order directory commits after rename
https://tracker.ceph.com/issues/601
2010-11-22T12:37:50Z
Sage Weil
sage@newdream.net
<p>When we rename something between directories, we should try to commit the target directory <em>before</em> the source directory. That way, if all goes to hell, we have at worst 2 links to the file instead of 0.</p>
<p>This probably means adding an xlist&lt;CDir*&gt; commit_after list to the CDirs and making commit wait until the others have committed first.</p>
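<p>The ordering constraint can be sketched with a toy model (Python here for brevity; this is not MDS code): each directory carries a commit_after list, and commit() flushes those directories before its own, so the rename target is durable before the source.</p>

```python
# Toy model of commit_after ordering: committing the source directory
# first forces the target directory to commit, so after a crash the file
# has at worst 2 links rather than 0.
class Dir:
    def __init__(self, name, log):
        self.name, self.log = name, log
        self.commit_after = []   # dirs that must hit disk before us
        self.committed = False

    def commit(self):
        if self.committed:
            return
        for dep in self.commit_after:
            dep.commit()         # flush the rename target first
        self.committed = True
        self.log.append(self.name)

order = []
src, dst = Dir("src", order), Dir("dst", order)
src.commit_after.append(dst)     # rename src/foo -> dst/foo
src.commit()
# order == ["dst", "src"]
```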
phprados - Feature #445 (New): Session handler
https://tracker.ceph.com/issues/445
2010-09-29T12:00:09Z
Wido den Hollander
wido@42on.com
<p>Add a session handler, so we can use RADOS as storage for our PHP sessions.</p>
<pre>session.save_handler = rados
session.save_path = phpsession</pre>
<p>Here <strong>phpsession</strong> is a RADOS pool, where every session will be a different object.</p>
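<p>The handler's semantics can be sketched as follows (in Python for illustration, though phprados is PHP): each session ID maps to exactly one object in the pool. The dict stands in for the <strong>phpsession</strong> RADOS pool; the real handler would issue librados read/write/remove calls instead.</p>

```python
# Sketch of session-handler semantics over a key/value store standing in
# for a RADOS pool: one object per session, keyed by session ID.
class RadosSessionHandler:
    def __init__(self, pool):
        self.pool = pool            # stand-in for a librados pool handle

    def read(self, session_id):
        return self.pool.get(session_id, "")

    def write(self, session_id, data):
        self.pool[session_id] = data   # one object per session

    def destroy(self, session_id):
        self.pool.pop(session_id, None)

pool = {}
handler = RadosSessionHandler(pool)
handler.write("sess-1", "user=wido")
```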
phprados - Feature #424 (New): Stream wrappers
https://tracker.ceph.com/issues/424
2010-09-23T08:44:10Z
Wido den Hollander
wido@42on.com
<p>Implement stream wrappers for easy access to RADOS objects through file_get_contents('rados://pool/objname'), file_put_contents(), fopen(), etc.</p>
<p>More information: <a class="external" href="http://php.net/manual/en/book.stream.php">http://php.net/manual/en/book.stream.php</a></p>
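<p>The first step of any such wrapper is mapping the URL onto a pool and an object name; a minimal sketch (Python here for illustration, the real wrapper would be PHP's streamWrapper class with stream_open(), stream_read(), and so on backed by librados):</p>

```python
# Parse a rados:// URL into (pool, object name). The scheme name follows
# the ticket; error handling is illustrative.
from urllib.parse import urlparse

def parse_rados_url(url):
    parts = urlparse(url)
    if parts.scheme != "rados":
        raise ValueError("not a rados:// URL")
    pool = parts.netloc
    objname = parts.path.lstrip("/")
    if not pool or not objname:
        raise ValueError("rados URL needs pool and object name")
    return pool, objname
```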
Linux kernel client - Feature #387 (New): expose directory subtree partition/replication/fragment...
https://tracker.ceph.com/issues/387
2010-08-28T10:48:19Z
Sage Weil
sage@newdream.net
RADOS - Cleanup #311 (New): osd: remove read(len=0) full object behavior
https://tracker.ceph.com/issues/311
2010-07-26T15:23:44Z
Sage Weil
sage@newdream.net
<p>...after the objecter no longer needs it (see <a class="issue tracker-2 status-3 priority-4 priority-default closed" title="Feature: objecter: limit in-flight ops and/or bytes written (Resolved)" href="https://tracker.ceph.com/issues/303">#303</a>)</p>
Ceph - Cleanup #299 (New): catch std::bad_alloc and die with helpful error in log on ENOMEM
https://tracker.ceph.com/issues/299
2010-07-22T13:28:44Z
Sage Weil
sage@newdream.net
CephFS - Feature #266 (New): mount.ceph: specify secret via name=foo and keyring=bar
https://tracker.ceph.com/issues/266
2010-07-08T08:59:52Z
Sage Weil
sage@newdream.net
<p>mount.ceph can run cauthtool -p keyringfile to extract the secret.</p>
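<p>A sketch of the option handling (Python for illustration; mount.ceph itself is C): split the mount option string, pull out name= and keyring=, and build the cauthtool invocation whose stdout becomes the secret. The option names come from the ticket; everything else is an assumption.</p>

```python
# Turn "name=foo,keyring=bar" mount options into the cauthtool command
# that extracts the secret, instead of passing secret= on the command line.
def split_options(opts):
    out = {}
    for item in opts.split(","):
        key, _, value = item.partition("=")
        out[key] = value
    return out

def secret_command(opts):
    parsed = split_options(opts)
    if "keyring" in parsed:
        # mount.ceph would run this and capture stdout as the secret
        return ["cauthtool", "-p", parsed["keyring"]]
    return None
```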
Linux kernel client - Feature #206 (New): make a 'soft' mode
https://tracker.ceph.com/issues/206
2010-06-16T11:56:09Z
Sage Weil
sage@newdream.net
<p>On Wed, 16 Jun 2010, Peter Niemayer wrote:</p>
<blockquote>
<p>Hi,</p>
<p>trying to "umount" a formerly mounted ceph filesystem that has become<br />unavailable (osd crashed, then msd/mon were shut down using /etc/init.d/ceph<br />stop) results in "umount" hanging forever in<br />"D" state.</p>
<p>Strangely, "umount -f" started from another terminal reports<br />the ceph filesystem as not being mounted anymore, which is consistent<br />with what the mount-table says.</p>
<p>The kernel keeps emitting the following messages from time to time:</p>
<blockquote>
<p>Jun 16 17:25:29 gitega kernel: ceph: tid 211912 timed out on osd0, will<br />reset osd<br />Jun 16 17:25:35 gitega kernel: ceph: mon0 10.166.166.1:6789 connection<br />failed<br />Jun 16 17:26:15 gitega last message repeated 4 times</p>
</blockquote>
<p>I would have expected the "umount" to terminate at least after some generous<br />timeout.</p>
<p>Ceph should probably support something like the "soft,intr" options<br />of NFS, because if the only supported way of mounting is one where<br />a client is more or less stuck-until-reboot when the service fails,<br />many potential test-configurations involving Ceph are way too dangerous<br />to try...</p>
</blockquote>
<p>Yeah, being able to force it to shut down when servers are unresponsive is <br />definitely the intent. 'umount -f' should work. It sounds like the <br />problem is related to the initial 'umount' (which doesn't time out) <br />followed by 'umount -f'.</p>
<p>I'm hesitant to add a blanket umount timeout, as that could prevent proper <br />writeout of cached data/metadata in some cases. So I think the goal <br />should be that if a normal umount hangs for some reason, you should be <br />able to intervene to add the 'force' if things don't go well.</p>
Linux kernel client - Feature #119 (New): avoid looping connect/retry errors on console
https://tracker.ceph.com/issues/119
2010-05-10T16:44:48Z
Sage Weil
sage@newdream.net
<p>we should try to avoid filling up logs with stuff like this:</p>
<pre>
[ 599.400000] ceph: tid 19823 timed out on osd0, will reset osd
[ 599.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 609.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 619.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 629.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 639.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 649.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 659.400000] ceph: tid 19823 timed out on osd0, will reset osd
[ 659.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 669.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 679.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 689.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 695.040000] ceph: mds0 hung
[ 699.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 709.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 719.400000] ceph: tid 19823 timed out on osd0, will reset osd
[ 719.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 729.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 739.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 749.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 759.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 769.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 779.400000] ceph: tid 19823 timed out on osd0, will reset osd
[ 779.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 789.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 799.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 809.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 819.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 829.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 839.400000] ceph: tid 19823 timed out on osd0, will reset osd
[ 839.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 849.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 859.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 869.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 879.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 889.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 899.400000] ceph: tid 19823 timed out on osd0, will reset osd
[ 899.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 909.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 919.400000] ceph: mon1 10.0.1.252:6790 connection failed
[ 929.400000] ceph: mon0 10.0.1.252:6789 connection failed
[ 939.400000] ceph: mon2 10.0.1.252:6791 connection failed
[ 949.400000] ceph: mon1 10.0.1.252:6790 connection failed
</pre>
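<p>One way to quiet this, sketched below (in Python; the kernel client would do this in C, in the spirit of printk_ratelimit()): suppress repeats of the same message within an interval and report how many were dropped when the message is next emitted. The interval and message format are illustrative.</p>

```python
# Per-message rate limiter: emit at most one copy of each message per
# interval, and note the number of suppressed repeats on the next emit.
import time

class RateLimitedLog:
    def __init__(self, interval=10.0, now=time.monotonic):
        self.interval, self.now = interval, now
        self.last = {}   # message -> (last emit time, suppressed count)

    def log(self, msg, emit=print):
        t = self.now()
        last_t, dropped = self.last.get(msg, (None, 0))
        if last_t is not None and t - last_t < self.interval:
            self.last[msg] = (last_t, dropped + 1)
            return False                      # suppressed
        if dropped:
            emit("ceph: %s (%d messages suppressed)" % (msg, dropped))
        else:
            emit("ceph: " + msg)
        self.last[msg] = (t, 0)
        return True
```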
CephFS - Bug #90 (New): mds: don't sync log on every clientreplay request
https://tracker.ceph.com/issues/90
2010-05-03T21:19:46Z
Sage Weil
sage@newdream.net
CephFS - Feature #83 (New): mds: rename over old files should flush data or revert to old contents?
https://tracker.ceph.com/issues/83
2010-05-03T21:14:23Z
Sage Weil
sage@newdream.net
<p>write to foo.conf.tmp<br />close<br />rename foo.conf.tmp to foo.conf<br />&lt;crash before flushing new file content&gt;</p>
<p>foo.conf now 0 bytes. :(</p>
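<p>This is the classic write-tmp-then-rename race: without a flush before the rename, a crash can leave the renamed file empty. A sketch of the application-side workaround that exists today (the ticket asks whether the MDS should make this implicit instead):</p>

```python
# Durable atomic replace: fsync the temp file before renaming it over
# the target, so a crash never leaves foo.conf with zero bytes.
import os

def atomic_replace(path, data):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # data is durable before the rename
    os.rename(tmp, path)       # atomically swap in the new contents
```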
Linux kernel client - Feature #25 (New): mdsc: mempool for cap writeback?
https://tracker.ceph.com/issues/25
2010-04-13T09:46:55Z
Sage Weil
sage@newdream.net
Linux kernel client - Feature #24 (New): mdsc: preallocate reply msgs
https://tracker.ceph.com/issues/24
2010-04-13T09:46:34Z
Sage Weil
sage@newdream.net
<p>We should preallocate space for replies to our MDS messages.</p>