Ceph : Issues
https://tracker.ceph.com/
2019-03-13T20:10:10Z
Ceph
Redmine
rgw - Documentation #38730 (Resolved): Added library/package for Golang
https://tracker.ceph.com/issues/38730
2019-03-13T20:10:10Z
Irek Fasikhov
malmyzh@gmail.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/26937">https://github.com/ceph/ceph/pull/26937</a></p>
rgw - Bug #38722 (Resolved): rgw: fix RGWDeleteMultiObj::verify_permission
https://tracker.ceph.com/issues/38722
2019-03-13T15:11:20Z
Irek Fasikhov
malmyzh@gmail.com
<p>So.<br />Set the following policy on the bucket:</p>
<pre><code class="json syntaxhl"><span class="CodeRay">{
<span class="key"><span class="delimiter">"</span><span class="content">Version</span><span class="delimiter">"</span></span>: <span class="string"><span class="delimiter">"</span><span class="content">2012-10-17</span><span class="delimiter">"</span></span>,
<span class="key"><span class="delimiter">"</span><span class="content">Statement</span><span class="delimiter">"</span></span>: [
{
<span class="key"><span class="delimiter">"</span><span class="content">Sid</span><span class="delimiter">"</span></span>:<span class="string"><span class="delimiter">"</span><span class="content">AddPerm</span><span class="delimiter">"</span></span>,
<span class="key"><span class="delimiter">"</span><span class="content">Effect</span><span class="delimiter">"</span></span>: <span class="string"><span class="delimiter">"</span><span class="content">Allow</span><span class="delimiter">"</span></span>,
<span class="key"><span class="delimiter">"</span><span class="content">Principal</span><span class="delimiter">"</span></span>: {<span class="key"><span class="delimiter">"</span><span class="content">AWS</span><span class="delimiter">"</span></span>: [
<span class="string"><span class="delimiter">"</span><span class="content">arn:aws:iam::dev:user/infas</span><span class="delimiter">"</span></span>
]},
<span class="key"><span class="delimiter">"</span><span class="content">Action</span><span class="delimiter">"</span></span>: [
<span class="string"><span class="delimiter">"</span><span class="content">s3:Put*</span><span class="delimiter">"</span></span>,
<span class="string"><span class="delimiter">"</span><span class="content">s3:List*</span><span class="delimiter">"</span></span>
],
<span class="key"><span class="delimiter">"</span><span class="content">Resource</span><span class="delimiter">"</span></span>: [
<span class="string"><span class="delimiter">"</span><span class="content">arn:aws:s3:::sb1/*</span><span class="delimiter">"</span></span>,
<span class="string"><span class="delimiter">"</span><span class="content">arn:aws:s3:::sb1</span><span class="delimiter">"</span></span>
]
}
]
}
</span></code></pre>
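<p>For context, a policy like this is attached by the bucket owner; with s3cmd that is a <code>setpolicy</code> call (the <code>~/.s3cfg-owner</code> config name below is illustrative and must belong to the bucket owner):</p>
<pre>
# save the policy above as policy.json, then attach it to the bucket as its owner
s3cmd -c ~/.s3cfg-owner setpolicy policy.json s3://sb1
</pre>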
<p>Put objects<br /><pre>
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd put winlogbeat-test -c ~/.s3cfg1 s3://sb1/
upload: 'winlogbeat-test' -> 's3://sb1/winlogbeat-test' [1 of 1]
14778761 of 14778761 100% in 0s 16.60 MB/s done
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd put winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22\ 03\:01\:05.933494181\ +0300\ MSK\ m\=+6.245462125 -c ~/.s3cfg1 s3://sb1/
upload: 'winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22 03:01:05.933494181 +0300 MSK m=+6.245462125' -> 's3://sb1/winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22 03:01:05.933494181 +0300 MSK m=+6.245462125' [1 of 1]
1165202 of 1165202 100% in 0s 8.72 MB/s done
</pre></p>
<p>List Bucket<br /><pre>
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd -c ~/.s3cfg1 ls -l s3://sb1/
2019-03-13 13:58 1165202 3f244bc9e225c4fab09ac5d9f8506126 STANDARD s3://sb1/winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22 03:01:05.933494181 +0300 MSK m=+6.245462125
2019-03-13 13:57 14778761 a3200c53eae46e7c8f0dd7f95add5b81 STANDARD s3://sb1/winlogbeat-test
</pre></p>
<p>Trying to delete the objects... Wow, they are deleted:<br /><pre>
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd -c ~/.s3cfg1 rm -rf s3://sb1/
delete: 's3://sb1/winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22 03:01:05.933494181 +0300 MSK m=+6.245462125'
delete: 's3://sb1/winlogbeat-test'
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd -c ~/.s3cfg1 rm -rf s3://sb1/
delete: 's3://sb1/winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22 03:01:05.933494181 +0300 MSK m=+6.245462125'
delete: 's3://sb1/winlogbeat-test'
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd -c ~/.s3cfg1 rm -rf s3://sb1/
delete: 's3://sb1/winlogbeat-6.4.2-2018.11.21_20790.json.gzip_2018-11-22 03:01:05.933494181 +0300 MSK m=+6.245462125'
delete: 's3://sb1/winlogbeat-test'
</pre></p>
<p>In fact, the user has no delete permissions, so these requests should be rejected with a 403 response.</p>
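<p>Expected behaviour, for comparison: the policy grants only s3:Put* and s3:List*, so a delete attempt by this user should fail with 403 (AccessDenied) instead of removing the objects. A minimal check with the same user config as above (the exact error text will vary):</p>
<pre>
# expected: rejected with 403 (AccessDenied), object not deleted
kataklysm@infas:~/tmp> ~/bin/s3cmd-2.0.2/s3cmd -c ~/.s3cfg1 rm s3://sb1/winlogbeat-test
</pre>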
Ceph - Bug #11757 (Duplicate): Failure to load the osdmap
https://tracker.ceph.com/issues/11757
2015-05-26T09:32:53Z
Irek Fasikhov
malmyzh@gmail.com
<p>Built from source, from the latest Hammer branch:</p>
<pre>
[root@ceph01p24 ~]# ceph -v
ceph version 0.94.1-116-g63832d4 (63832d4039889b6b704b88b86eaba4aadcfceb2e)
</pre>
<pre>
-5> 2015-05-26 09:42:54.997851 7f0fbfa34880 10 _load_class version success
-4> 2015-05-26 09:42:54.997862 7f0fbfa34880 20 osd.25 0 get_map 17735 - loading and decoding 0x4589200
-3> 2015-05-26 09:42:54.997869 7f0fbfa34880 15 filestore(/var/lib/ceph/osd/ceph-25) read meta/4e928679/osdmap.17735/0//-1 0~0
-2> 2015-05-26 09:42:54.997890 7f0fbfa34880 10 filestore(/var/lib/ceph/osd/ceph-25) error opening file /var/lib/ceph/osd/ceph-25/current/meta/DIR_9/DIR_7/osdmap.17735__0_4E928679__none with flags=2: (2) No such file or directory
-1> 2015-05-26 09:42:54.997899 7f0fbfa34880 10 filestore(/var/lib/ceph/osd/ceph-25) FileStore::read(meta/4e928679/osdmap.17735/0//-1) open error: (2) No such file or directory
0> 2015-05-26 09:42:54.999254 7f0fbfa34880 -1 osd/OSD.h: In function 'OSDMapRef OSDService::get_map(epoch_t)' thread 7f0fbfa34880 time 2015-05-26 09:42:54.997908
osd/OSD.h: 716: FAILED assert(ret)
ceph version 0.94.1-116-g63832d4 (63832d4039889b6b704b88b86eaba4aadcfceb2e)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0xbc4e15]
2: (OSDService::get_map(unsigned int)+0x3f) [0x6ffa9f]
3: (OSD::init()+0x6b7) [0x6b8e17]
4: (main()+0x27f3) [0x643b63]
5: (__libc_start_main()+0xf5) [0x7f0fbcdd2af5]
6: /usr/bin/ceph-osd() [0x65cdc9]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
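<p>To help narrow this down, a quick diagnostic sketch (paths and epoch taken from the log above; this only verifies that the map is missing and pulls a copy from the monitors for inspection, it is not a fix):</p>
<pre>
# confirm the osdmap file for epoch 17735 is really absent from this OSD's filestore
ls -l /var/lib/ceph/osd/ceph-25/current/meta/DIR_9/DIR_7/ | grep osdmap.17735

# if the monitors still have that epoch, fetch it and inspect it
ceph osd getmap 17735 -o /tmp/osdmap.17735
osdmaptool /tmp/osdmap.17735 --print
</pre>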
<p>Configuration:<br /><pre>
[osd]
osd journal size = 10000
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = rw,noatime,inode64,logbsize=256k,allocsize=1m
filestore xattr use omap = true
osd scrub load threshold = 2
osd recovery op priority = 2
osd max backfills = 1
osd recovery max active = 1
osd recovery threads = 1
osd crush update on start = false
osd recovery delay start = 5
osd snap trim sleep = 0.5
osd disk thread ioprio class = idle
osd disk thread ioprio priority = 7
debug_objecter = 20/20
debug_ms = 20/20
debug_filestore = 20/20
debug_osd = 20/20
debug_journal = 20/20
</pre></p>
<p>System:<br /><pre>
[root@ceph01p24 ~]# uname -a
Linux ceph01p24.bank-hlynov.ru 3.10.0-229.4.2.el7.x86_64 #1 SMP Wed May 13 10:06:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@ceph01p24 ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
</pre></p>
Ceph - Bug #11511 (Resolved): FAILED assert(cursor.data_offset <= oi.size)
https://tracker.ceph.com/issues/11511
2015-04-30T13:08:33Z
Irek Fasikhov
malmyzh@gmail.com
<p>Hi.<br />While rebalancing data, an OSD failed with this assertion.</p>
Ceph - Bug #10832 (Duplicate): FAILED assert(soid < scrubber.start || soid >= scrubber.end)
https://tracker.ceph.com/issues/10832
2015-02-11T07:34:07Z
Irek Fasikhov
malmyzh@gmail.com
<pre>
osd/ReplicatedPG.cc: 5318: FAILED assert(soid < scrubber.start || soid >= scrubber.end)
ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7)
1: (ReplicatedPG::finish_ctx(ReplicatedPG::OpContext*, int, bool)+0x32f5) [0x889275]
2: (ReplicatedPG::finish_promote(int, std::tr1::shared_ptr<OpRequest>, ReplicatedPG::CopyResults*, std::tr1::shared_ptr<ObjectContext>)+0x110f) [0x8903ef]
3: (PromoteCallback::finish(boost::tuples::tuple<int, ReplicatedPG::CopyResults*, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type
, boost::tuples::null_type, boost::tuples::null_type>)+0x78) [0x8e29b8]
4: (GenContext<boost::tuples::tuple<int, ReplicatedPG::CopyResults*, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tupl
es::null_type, boost::tuples::null_type> >::complete(boost::tuples::tuple<int, ReplicatedPG::CopyResults*, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type,
boost::tuples::null_type, boost::tuples::null_type, boost::tuples::null_type>)+0x15) [0x8b34f5]
5: (ReplicatedPG::process_copy_chunk(hobject_t, unsigned long, int)+0x747) [0x885407]
6: (C_Copyfrom::finish(int)+0xb7) [0x8e2777]
7: (Context::complete(int)+0x9) [0x667209]
8: (Finisher::finisher_thread_entry()+0x1d8) [0x9ed148]
9: (()+0x79d1) [0x7f17d39879d1]
10: (clone()+0x6d) [0x7f17d29008fd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
Ceph - Bug #10778 (Can't reproduce): Rollback+ReplicationPG -> Segmentation fault
https://tracker.ceph.com/issues/10778
2015-02-06T06:50:25Z
Irek Fasikhov
malmyzh@gmail.com
<p>This morning I found that some OSDs had dropped out of the cache tier pool. It may be a coincidence, but a rollback was in progress at that moment.</p>
<pre>
2015-02-05 23:23:18.231723 7fd747ff1700 -1 *** Caught signal (Segmentation fault) **
in thread 7fd747ff1700
ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7)
1: /usr/bin/ceph-osd() [0x9bde51]
2: (()+0xf710) [0x7fd766f97710]
3: (std::_Rb_tree_decrement(std::_Rb_tree_node_base*)+0xa) [0x7fd7666c1eca]
4: (ReplicatedPG::make_writeable(ReplicatedPG::OpContext*)+0x14c) [0x87cd5c]
5: (ReplicatedPG::prepare_transaction(ReplicatedPG::OpContext*)+0x1db) [0x89d29b]
6: (ReplicatedPG::execute_ctx(ReplicatedPG::OpContext*)+0xcd4) [0x89e0f4]
7: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)+0x2ca5) [0x8a2a55]
8: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x5b1) [0x832251]
9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x37c) [0x61344c]
10: (OSD::OpWQ::_process(boost::intrusive_ptr<PG>, ThreadPool::TPHandle&)+0x63d) [0x6472ad]
11: (ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process(void*, ThreadPool::TPHandle&)+0xae) [0x67dcde]
12: (ThreadPool::worker(ThreadPool::WorkThread*)+0x551) [0xa2a181]
13: (ThreadPool::WorkThread::entry()+0x10) [0xa2d260]
14: (()+0x79d1) [0x7fd766f8f9d1]
15: (clone()+0x6d) [0x7fd765f088fd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
</pre>
Ceph - Bug #9765 (Duplicate): CachePool flush -> OSD Failed
https://tracker.ceph.com/issues/9765
2014-10-14T01:19:06Z
Irek Fasikhov
malmyzh@gmail.com
<p>Hi, all.</p>
<p>I ran into a problem while flushing data before deleting a cache pool.<br />My crush map:<br /><pre>
[root@ct01 ceph]# ceph osd tree
# id weight type name up/down reweight
-8 3 root cache.bank-hlynov.ru
-5 1 host cache1
13 1 osd.13 up 1
-6 1 host cache2
31 1 osd.31 up 1
-7 1 host cache3
30 1 osd.30 up 1
-1 109.2 root bank-hlynov.ru
-2 36.4 host ct01
2 3.64 osd.2 up 1
4 3.64 osd.4 up 1
5 3.64 osd.5 up 1
6 3.64 osd.6 up 1
7 3.64 osd.7 up 1
8 3.64 osd.8 up 1
9 3.64 osd.9 up 1
10 3.64 osd.10 up 1
11 3.64 osd.11 up 1
0 3.64 osd.0 up 1
-3 36.4 host ct2
12 3.64 osd.12 up 1
14 3.64 osd.14 up 1
15 3.64 osd.15 up 1
16 3.64 osd.16 up 1
17 3.64 osd.17 up 1
18 3.64 osd.18 up 1
19 3.64 osd.19 up 1
20 3.64 osd.20 up 1
21 3.64 osd.21 up 1
32 3.64 osd.32 up 1
-4 36.4 host ct3
1 3.64 osd.1 up 1
3 3.64 osd.3 up 1
22 3.64 osd.22 up 1
23 3.64 osd.23 up 1
24 3.64 osd.24 up 1
25 3.64 osd.25 up 1
26 3.64 osd.26 up 1
27 3.64 osd.27 up 1
28 3.64 osd.28 up 1
29 3.64 osd.29 up 1
</pre></p>
<p>My actions:<br />1. ceph osd tier add rbd cache<br />2. ceph osd tier cache-mode cache writeback<br />3. ceph osd tier set-overlay rbd cache<br />4. set hit_set_count, hit_set_period, etc. (see the example below)<br />5. write data from a virtual machine:<br /> rados -p cache ls | wc -l<br /> 28685<br />6. ceph osd tier cache-mode cache forward</p>
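<p>The exact hit-set values used in step 4 are not recorded here; a typical configuration along these lines would be (values illustrative):</p>
<pre>
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 1
ceph osd pool set cache hit_set_period 3600
ceph osd pool set cache target_max_bytes 1099511627776
</pre>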
<p>7.<br /><pre>rados -p cache cache-flush-evict-all
rbd_data.53b752ae8944a.000000000000bebe
rbd_data.53b752ae8944a.000000000000bd11
rbd_data.53b752ae8944a.000000000000c286
rbd_data.53b752ae8944a.000000000000c3f7
rbd_data.45132ae8944a.0000000000001a2c
rbd_data.c1362ae8944a.0000000000017a4c
2014-10-14 11:39:18.183180 7fac98c34700 0 -- 192.168.50.1:0/1032885 >> 192.168.50.1:6807/32153 pipe(0x7fac90012f30 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7fac900131c0).fault
[root@ct01 ceph]# ceph osd stat
osdmap e25538: 33 osds: 32 up, 33 in
rados -p cache ls | wc -l
28679
</pre></p>
<p>8. osd.13 is dead :(<br /><pre>
[root@ct01 ceph]# ceph osd tree
# id weight type name up/down reweight
-8 3 root cache.bank-hlynov.ru
-5 1 host cache1
*13 1 osd.13 down 0*
-6 1 host cache2
31 1 osd.31 up 1
-7 1 host cache3
30 1 osd.30 up 1
-1 109.2 root bank-hlynov.ru
-2 36.4 host ct01
2 3.64 osd.2 up 1
4 3.64 osd.4 up 1
5 3.64 osd.5 up 1
6 3.64 osd.6 up 1
7 3.64 osd.7 up 1
8 3.64 osd.8 up 1
9 3.64 osd.9 up 1
10 3.64 osd.10 up 1
11 3.64 osd.11 up 1
0 3.64 osd.0 up 1
-3 36.4 host ct2
12 3.64 osd.12 up 1
14 3.64 osd.14 up 1
15 3.64 osd.15 up 1
16 3.64 osd.16 up 1
17 3.64 osd.17 up 1
18 3.64 osd.18 up 1
19 3.64 osd.19 up 1
20 3.64 osd.20 up 1
21 3.64 osd.21 up 1
32 3.64 osd.32 up 1
-4 36.4 host ct3
1 3.64 osd.1 up 1
3 3.64 osd.3 up 1
22 3.64 osd.22 up 1
23 3.64 osd.23 up 1
24 3.64 osd.24 up 1
25 3.64 osd.25 up 1
26 3.64 osd.26 up 1
27 3.64 osd.27 up 1
28 3.64 osd.28 up 1
29 3.64 osd.29 up 1
</pre></p>
<p>OS: CentOS 6.5<br />Kernel: 2.6.32-431.el6.x86_64<br />Ceph --version: ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)</p>
rbd - Bug #9602 (Closed): rbd export -> nc ->rbd import = memory leak
https://tracker.ceph.com/issues/9602
2014-09-26T05:26:39Z
Irek Fasikhov
malmyzh@gmail.com
<p>I see a memory leak when importing a raw device.</p>
<p>Export scheme:<br /><pre>
[rbd@rbdbackup ~]$ rbd --no-progress -n client.rbdbackup -k /etc/ceph/big.keyring -c /etc/ceph/big.conf export rbdtest/vm-111-disk-1 - | nc 10.43.255.252 12345
[root@ct2 ~]# nc -l 12345 | rbd import --no-progress --image-format 2 - rbd/vm-111-disk-1
</pre></p>
<p>The same problem occurs with ssh.<br />For memory usage, see the screenshots.</p>
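<p>For reference, the ssh variant that shows the same behaviour looks like this (the host and image names simply mirror the nc example above):</p>
<pre>
[rbd@rbdbackup ~]$ rbd --no-progress -n client.rbdbackup -k /etc/ceph/big.keyring -c /etc/ceph/big.conf export rbdtest/vm-111-disk-1 - | ssh root@ct2 'rbd import --no-progress --image-format 2 - rbd/vm-111-disk-1'
</pre>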
<p>OS: CentOS 6.5<br />Kernel: 2.6.32-431.el6.x86_64<br />Ceph --version: ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)</p>