Ceph: Issues - https://tracker.ceph.com/ - 2022-03-03T16:12:00Z
bluestore - Bug #54465 (Resolved): BlueFS broken sync compaction mode
https://tracker.ceph.com/issues/54465 - 2022-03-03T16:12:00Z - Adam Kupczyk
The BlueFS fine-grained locking refactor broke the sync compaction mode.
The problem is an off-by-one in seq, which causes all but the first _replay log entries to be dropped.
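For illustration, a minimal standalone sketch (not the actual BlueFS code) of how such an off-by-one plays out during replay: the reader applies an entry only when its seq matches the expected next value, so a writer that stamps a transaction with the previous seq causes everything after the first entry to be dropped. The seq values and payload strings below are invented for the example.

// Illustrative model only: replay applies entries in strict seq order and
// stops at the first mismatch, so a writer-side off-by-one drops everything
// after the first entry.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct LogEntry {
  uint64_t seq;         // transaction sequence number stamped by the writer
  std::string payload;  // stand-in for the ops carried by the transaction
};

int main() {
  // Buggy writer: the second transaction reuses seq 1025 instead of 1026.
  std::vector<LogEntry> log = {
      {1025, "op_dir_create sharding; op_dir_link sharding/def; op_jump_seq 1025"},
      {1025, "op_file_update ..."},
      {1026, "op_file_update ..."},
  };

  uint64_t expected = 1025;  // next seq the replayer expects to see
  for (const auto& e : log) {
    if (e.seq != expected) {
      std::cout << "stop: seq " << e.seq << " != expected " << expected << "\n";
      break;  // all remaining entries are dropped
    }
    std::cout << "replay: " << e.payload << "\n";
    expected = e.seq + 1;
  }
  return 0;
}

Built with any C++11 compiler, this replays only the first entry and then prints the same "stop: seq 1025 != expected 1026" pattern seen in the log below.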

2022-03-03T07:55:39.765+0000 7ffff7fda840 20 bluefs _replay 0x0: op_dir_create sharding
2022-03-03T07:55:39.765+0000 7ffff7fda840 20 bluefs _replay 0x0: op_dir_link sharding/def to 21
2022-03-03T07:55:39.765+0000 7ffff7fda840 20 bluefs _replay 0x0: op_jump_seq 1025
2022-03-03T07:55:39.765+0000 7ffff7fda840 10 bluefs _read h 0x555557c46400 0x1000~1000 from file(ino 1 size 0x1000 mtime 0.000000 allocated 410000 alloc_commit 410000 extents [1:0x1540000~410000])
2022-03-03T07:55:39.765+0000 7ffff7fda840 20 bluefs _read left 0xff000 len 0x1000
2022-03-03T07:55:39.765+0000 7ffff7fda840 20 bluefs _read got 4096
2022-03-03T07:55:39.765+0000 7ffff7fda840 10 bluefs _replay 0x1000: stop: seq 1025 != expected 1026
2022-03-03T07:55:39.765+0000 7ffff7fda840 10 bluefs _replay log file size was 0x1000
2022-03-03T07:55:39.765+0000 7ffff7fda840 10 bluefs _replay done
The default compaction mode is async.

bluestore - Bug #54248 (Resolved): BlueFS improperly tracks vselector sizes in _flush_special()
https://tracker.ceph.com/issues/54248 - 2022-02-10T15:27:23Z - Adam Kupczyk
This problem was introduced by the fine-grained locking refactor.

RADOS - Bug #53685 (New): Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed.
https://tracker.ceph.com/issues/53685 - 2021-12-21T11:22:25Z - Adam Kupczyk
<p>Test "rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind}"</p>
http://pulpito.front.sepia.ceph.com/yuriw-2021-12-17_19:17:02-rados-wip-yuri3-testing-2021-12-17-0825-distro-default-smithi/6569207/
The run failed with:
Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed.
2021-12-18T00:08:44.222 INFO:tasks.ceph.osd.5.smithi151.stderr:ceph-osd: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-9707-ga73102d4/rpm/el8/BUILD/ceph-17.0.0-9707-ga73102d4/src/messages/MOSDRepOp.h:127: virtual void MOSDRepOp::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed.
2021-12-18T00:08:44.253 INFO:tasks.ceph.osd.3.smithi120.stderr:2021-12-18T00:08:44.137+0000 ea6a700 -1 received signal: Hangup from /usr/bin/python3 /bin/daemon-helper term env OPENSSL_ia32cap=~0x1000000000000000 valgrind --trace-children=no --child-silent-after-fork=yes --soname-synonyms=somalloc=*tcmalloc* --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/osd.3.log --time-stamp=yes --vgdb=yes --exit-on-first-error=yes --error-exitcode=42 --tool=memcheck ceph-osd -f --cluster ceph -i 3 (PID: 35068) UID: 0
2021-12-18T00:08:44.282 INFO:tasks.ceph.osd.2.smithi120.stderr:2021-12-18T00:08:44.136+0000 ee6a700 -1 received signal: Hangup from /usr/bin/python3 /bin/daemon-helper term env OPENSSL_ia32cap=~0x1000000000000000 valgrind --trace-children=no --child-silent-after-fork=yes --soname-synonyms=somalloc=*tcmalloc* --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/osd.2.log --time-stamp=yes --vgdb=yes --exit-on-first-error=yes --error-exitcode=42 --tool=memcheck ceph-osd -f --cluster ceph -i 2 (PID: 35067) UID: 0
2021-12-18T00:08:44.284 INFO:tasks.ceph.osd.5.smithi151.stderr:*** Caught signal (Aborted) **
2021-12-18T00:08:44.284 INFO:tasks.ceph.osd.5.smithi151.stderr: in thread 2de5d700 thread_name:tp_osd_tp
2021-12-18T00:08:44.418 INFO:tasks.ceph.osd.5.smithi151.stderr: ceph version 17.0.0-9707-ga73102d4 (a73102d4a8bb9378f707185ba2d1a9e105c3b138) quincy (dev)
2021-12-18T00:08:44.418 INFO:tasks.ceph.osd.5.smithi151.stderr: 1: /lib64/libpthread.so.0(+0x12b20) [0x6825b20]
2021-12-18T00:08:44.419 INFO:tasks.ceph.osd.5.smithi151.stderr: 2: gsignal()
2021-12-18T00:08:44.419 INFO:tasks.ceph.osd.5.smithi151.stderr: 3: abort()
2021-12-18T00:08:44.419 INFO:tasks.ceph.osd.5.smithi151.stderr: 4: /lib64/libc.so.6(+0x21c89) [0x7a52c89]
2021-12-18T00:08:44.419 INFO:tasks.ceph.osd.5.smithi151.stderr: 5: /lib64/libc.so.6(+0x2fa76) [0x7a60a76]
2021-12-18T00:08:44.420 INFO:tasks.ceph.osd.5.smithi151.stderr: 6: (MOSDRepOp::encode_payload(unsigned long)+0x2d0) [0xbb0960]
2021-12-18T00:08:44.420 INFO:tasks.ceph.osd.5.smithi151.stderr: 7: (Message::encode(unsigned long, int, bool)+0x2e) [0x101dade]
2021-12-18T00:08:44.420 INFO:tasks.ceph.osd.5.smithi151.stderr: 8: (ProtocolV2::prepare_send_message(unsigned long, Message*)+0x44) [0x12a3fb4]
2021-12-18T00:08:44.420 INFO:tasks.ceph.osd.5.smithi151.stderr: 9: (ProtocolV2::send_message(Message*)+0x3ae) [0x12a460e]
2021-12-18T00:08:44.421 INFO:tasks.ceph.osd.5.smithi151.stderr: 10: (AsyncConnection::send_message(Message*)+0x53e) [0x1280cbe]
2021-12-18T00:08:44.421 INFO:tasks.ceph.osd.5.smithi151.stderr: 11: (OSDService::send_message_osd_cluster(int, Message*, unsigned int)+0xf5) [0x7cf3c5]
2021-12-18T00:08:44.421 INFO:tasks.ceph.osd.5.smithi151.stderr: 12: (ReplicatedBackend::issue_op(hobject_t const&, eversion_t const&, unsigned long, osd_reqid_t, eversion_t, eversion_t, hobject_t, hobject_t, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, std::optional<pg_hit_set_history_t>&, ReplicatedBackend::InProgressOp*, ceph::os::Transaction&)+0x557) [0xb99427]
2021-12-18T00:08:44.422 INFO:tasks.ceph.osd.5.smithi151.stderr: 13: (ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> >&&, std::optional<pg_hit_set_history_t>&, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr<OpRequest>)+0xa94) [0xb9bc24]
2021-12-18T00:08:44.422 INFO:tasks.ceph.osd.5.smithi151.stderr: 14: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0xc80) [0x8ed9a0]
2021-12-18T00:08:44.422 INFO:tasks.ceph.osd.5.smithi151.stderr: 15: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x1097) [0x94ff97]
2021-12-18T00:08:44.422 INFO:tasks.ceph.osd.5.smithi151.stderr: 16: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x39be) [0x95455e]
2021-12-18T00:08:44.423 INFO:tasks.ceph.osd.5.smithi151.stderr: 17: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xe2e) [0x95b52e]
2021-12-18T00:08:44.423 INFO:tasks.ceph.osd.5.smithi151.stderr: 18: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x332) [0x7de142]
2021-12-18T00:08:44.423 INFO:tasks.ceph.osd.5.smithi151.stderr: 19: (ceph::osd::scheduler::PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x6f) [0xa9fb8f]
2021-12-18T00:08:44.423 INFO:tasks.ceph.osd.5.smithi151.stderr: 20: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xac8) [0x7fc088]
2021-12-18T00:08:44.423 INFO:tasks.ceph.osd.5.smithi151.stderr: 21: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5c4) [0xeefeb4]
2021-12-18T00:08:44.424 INFO:tasks.ceph.osd.5.smithi151.stderr: 22: (ShardedThreadPool::WorkThreadSharded::entry()+0x14) [0xef1254]
2021-12-18T00:08:44.424 INFO:tasks.ceph.osd.5.smithi151.stderr: 23: /lib64/libpthread.so.0(+0x814a) [0x681b14a]
2021-12-18T00:08:44.424 INFO:tasks.ceph.osd.5.smithi151.stderr: 24: clone()

bluestore - Bug #53261 (Duplicate): pacific: OMAP upgrade to PER-PG format result in skipped firs...
https://tracker.ceph.com/issues/53261 - 2021-11-13T10:39:33Z - Adam Kupczyk
This is a regression introduced by a fix to the omap upgrade: https://github.com/ceph/ceph/pull/43687
The problem is that we always skipped the first omap entry.
This works fine for objects that have an omap header key.
For objects without a header key we skipped the first actual omap key.
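For illustration, a minimal standalone sketch (not the BlueStore upgrade code) of the described logic error: an upgrade loop that unconditionally skips the first old-format entry on the assumption that it is the omap header. The HEADER marker and key names are invented for the example; header migration itself is elided.

// Illustrative model only: skipping "the header" unconditionally drops the
// first real key for objects that have no header entry at all.
#include <iostream>
#include <map>
#include <string>

using KV = std::map<std::string, std::string>;

// Hypothetical header marker for this example.
const std::string HEADER = "-";

KV upgrade_buggy(const KV& old) {
  KV out;
  auto it = old.begin();
  if (it != old.end()) ++it;            // bug: always skip the first entry
  for (; it != old.end(); ++it) out.insert(*it);
  return out;
}

KV upgrade_fixed(const KV& old) {
  KV out;
  for (auto it = old.begin(); it != old.end(); ++it) {
    if (it->first == HEADER) continue;  // skip only a real header key
    out.insert(*it);
  }
  return out;
}

int main() {
  KV with_header = {{HEADER, "hdr"}, {"k1", "v1"}, {"k2", "v2"}};
  KV without_header = {{"k1", "v1"}, {"k2", "v2"}};
  std::cout << upgrade_buggy(with_header).size() << "\n";     // 2: looks fine
  std::cout << upgrade_buggy(without_header).size() << "\n";  // 1: k1 silently lost
  std::cout << upgrade_fixed(without_header).size() << "\n";  // 2: both keys kept
}
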
bluestore - Bug #53260 (Resolved): OMAP upgrade to PER-PG format result in skipped first key.
https://tracker.ceph.com/issues/53260 - 2021-11-13T10:36:27Z - Adam Kupczyk

This is a regression introduced by a fix to the omap upgrade: https://github.com/ceph/ceph/pull/43687
The problem is that we always skipped the first omap entry.
This works fine for objects that have an omap header key.
For objects without a header key we skipped the first actual omap key.

bluestore - Bug #53129 (Resolved): BlueFS truncate() and poweroff can create corrupted files
https://tracker.ceph.com/issues/53129 - 2021-11-02T15:54:51Z - Adam Kupczyk
It is possible to create a condition in which BlueFS contains a corrupted file.
It can happen when the BlueFS replay log is on device A and we have just written to device B and truncated the file.
Scenario:
1) write to file h1 on the SLOW device
2) flush h1 (initiate data transfer, but no fdatasync yet)
3) truncate h1
4) write to file h2 on DB
5) fsync h2 (forces the replay log to be written, after fdatasync to DB)
6) poweroff
As a result we have file h1 that is properly declared in the replay log, but with uninitialized content.
This happens even with the fix for https://tracker.ceph.com/issues/50965 applied.
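For illustration, a minimal standalone model (not the BlueFS API) of the ordering in the scenario above: the h1 data reaches only a volatile write cache on the SLOW device (flush without fdatasync), while the replay log on the DB device is made durable; after the power cut the log still describes h1, but its bytes never became durable. Device, write, fdatasync and power_off are invented model primitives.

// Illustrative model only: metadata (the replay log) is synced on one device
// while the data it describes was only flushed, not synced, on another.
#include <iostream>
#include <string>

struct Device {
  std::string durable;   // what survives power loss
  std::string cache;     // writes issued but not yet fdatasync'ed
  void write(const std::string& d) { cache += d; }      // data in flight only
  void fdatasync() { durable += cache; cache.clear(); } // make it durable
  void power_off() { cache.clear(); }                   // volatile cache is lost
};

int main() {
  Device slow, db;                       // h1 data on SLOW, replay log on DB
  const std::string payload = "h1 payload";
  slow.write(payload);                   // 1)+2) write and flush h1, no fdatasync
  const auto h1_size = payload.size();   // 3) truncate: size update queued for the log
  db.write("replay log: h1 size=" + std::to_string(h1_size));  // 4)+5) fsync h2 forces
  db.fdatasync();                        //        the log out, syncing DB only
  slow.power_off();                      // 6) poweroff before SLOW was synced
  db.power_off();

  std::cout << "durable log on DB: \"" << db.durable << "\"\n";
  std::cout << "durable h1 bytes on SLOW: " << slow.durable.size() << "\n";
  // Prints 0 durable bytes: the log declares h1, but its content is gone.
}
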
I think it is a regression introduced by the above fix.

bluestore - Bug #50965 (Resolved): In poweroff conditions BlueFS can create corrupted files
https://tracker.ceph.com/issues/50965 - 2021-05-25T07:12:44Z - Adam Kupczyk
It is possible to create a condition in which BlueFS contains a corrupted file.
It can happen when the BlueFS replay log is on device A and we have just written to device B.
Scenario:
1) write to file h1 on the SLOW device
2) flush h1 (this also triggers an h1 mark to be added to the BlueFS replay log, but no fdatasync yet)
3) write to file h2 on DB
4) fsync h2 (forces the replay log to be written, after fdatasync to DB)
5) poweroff
As a result we have file h1 that is properly declared in the replay log, but with uninitialized content.

bluestore - Bug #46027 (Resolved): bufferlist c_str() sometimes clears assignment to mempool
https://tracker.ceph.com/issues/46027 - 2020-06-16T08:02:19Z - Adam Kupczyk
Sometimes c_str() needs to rebuild the underlying buffer::raw.
In that case the original assignment to a mempool is lost.
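For illustration, a minimal standalone model (not the ceph::buffer implementation) of the described effect: a multi-segment buffer list tagged to a mempool is rebuilt into one contiguous segment by c_str(), and the rebuilt segment comes back with the default pool because the rebuild does not carry the tag over. The Pool enum, Segment and BufferList types are invented for the example.

// Illustrative model only: rebuilding a fragmented buffer must also carry
// over the mempool tag, otherwise the accounting silently moves back to the
// default pool.
#include <iostream>
#include <string>
#include <vector>

enum class Pool { anon, bluestore_cache_data };

struct Segment {
  std::string data;
  Pool pool = Pool::anon;
};

struct BufferList {
  std::vector<Segment> segs;
  void append(std::string d) { segs.push_back({std::move(d)}); }
  void reassign_to_mempool(Pool p) { for (auto& s : segs) s.pool = p; }

  // Buggy rebuild: the merged segment is created with the default pool.
  const std::string& c_str() {
    if (segs.size() > 1) {
      Segment merged;                    // pool left at Pool::anon
      for (auto& s : segs) merged.data += s.data;
      segs.clear();
      segs.push_back(std::move(merged)); // original pool assignment lost
    }
    return segs.front().data;
  }
};

int main() {
  BufferList bl;
  bl.append("abc");
  bl.append("def");
  bl.reassign_to_mempool(Pool::bluestore_cache_data);
  bl.c_str();  // rebuild into one contiguous segment
  std::cout << (bl.segs.front().pool == Pool::anon ? "back in anon pool\n"
                                                   : "still in cache pool\n");
  // Prints "back in anon pool"; the fix is to tag the rebuilt buffer with the
  // pool of the buffers it replaces.
}
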
bluestore - Bug #45903 (Resolved): BlueFS replay log grows without end
https://tracker.ceph.com/issues/45903 - 2020-06-05T08:29:52Z - Adam Kupczyk

If data is slowly pouring into the RocksDB WAL and no new files are created, the BlueFS replay log can grow to a size at which it can no longer be replayed.

bluestore - Bug #43538 (Resolved): BlueFS volume selector assert
https://tracker.ceph.com/issues/43538 - 2020-01-09T17:20:34Z - Adam Kupczyk
I have been running tests that operate only on RocksDB.
During intensive omap writes, sometimes the following happens:

 -3> 2020-01-09T06:33:11.782-0500 7fb3b9b19700 4 rocksdb: [db/compaction_job.cc:1332] [default] [JOB 1126] Generated table #4214: 32235 keys, 2258207 bytes
 -2> 2020-01-09T06:33:11.782-0500 7fb3b9b19700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578569591783536, "cf_name": "default", "job": 1126, "event": "table_file_creation", "file_number": 4214, "file_size": 2258207, "table_properties": {"data_size": 2155594, "index_size": 21047, "filter_size": 80709, "raw_key_size": 896982, "raw_average_key_size": 27, "raw_value_size": 1704768, "raw_average_value_size": 52, "num_data_blocks": 533, "num_entries": 32235, "filter_policy_name": "rocksdb.BuiltinBloomFilter"}}
 -1> 2020-01-09T06:33:11.793-0500 7fb3b4b0f700 -1 /work/adam/ceph-4/src/os/bluestore/BlueStore.h: In function 'virtual void RocksDBBlueFSVolumeSelector::sub_usage(void*, const bluefs_fnode_t&)' thread 7fb3b4b0f700 time 2020-01-09T06:33:11.784907-0500
/work/adam/ceph-4/src/os/bluestore/BlueStore.h: 3677: FAILED ceph_assert(cur >= p.length)
 ceph version 15.0.0-8514-gf01523d (f01523d0021e786f660e1d2823c38088ea7fe49c) octopus (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x56309434c3cc]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x56309434c59a]
 3: (RocksDBBlueFSVolumeSelector::sub_usage(void*, bluefs_fnode_t const&)+0x2a4) [0x5630948f9f64]
 4: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0xf25) [0x563094973be5]
 5: (BlueFS::_flush(BlueFS::FileWriter*, bool)+0x10b) [0x563094974dab]
 6: (BlueFS::_flush_and_sync_log(std::unique_lock<std::mutex>&, unsigned long, unsigned long)+0x8a3) [0x563094975aa3]
 7: (BlueFS::_fsync(BlueFS::FileWriter*, std::unique_lock<std::mutex>&)+0x92) [0x563094978032]
 8: (BlueRocksWritableFile::Sync()+0x63) [0x563094999e63]
 9: (rocksdb::WritableFileWriter::SyncInternal(bool)+0x321) [0x563094f79421]
 10: (rocksdb::WritableFileWriter::Sync(bool)+0x88) [0x563094f7a9c8]
 11: (rocksdb::DBImpl::WriteToWAL(rocksdb::WriteThread::WriteGroup const&, rocksdb::log::Writer*, unsigned long*, bool, bool, unsigned long)+0x304) [0x563094e0b614]
 12: (rocksdb::DBImpl::WriteImpl(rocksdb::WriteOptions const&, rocksdb::WriteBatch*, rocksdb::WriteCallback*, unsigned long*, unsigned long, bool, unsigned long*, unsigned long, rocksdb::PreReleaseCallback*)+0x257e) [0x563094e1404e]
 13: (rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, rocksdb::WriteBatch*)+0x21) [0x563094e14531]
 14: (RocksDBStore::submit_common(rocksdb::WriteOptions&, std::shared_ptr<KeyValueDB::TransactionImpl>)+0x81) [0x563094dc8331]
 15: (RocksDBStore::submit_transaction_sync(std::shared_ptr<KeyValueDB::TransactionImpl>)+0x97) [0x563094dc8da7]
 16: (BlueStore_DB_Hash::submit_transaction_sync(std::shared_ptr<KeyValueDB::TransactionImpl>)+0x4b) [0x5630948fbdbb]
 17: (BlueStore::_kv_sync_thread()+0x2279) [0x5630948d6c99]
 18: (BlueStore::KVSyncThread::entry()+0xd) [0x5630948fb63d]
 19: (()+0x7e25) [0x7fb3c8b3ce25]
 20: (clone()+0x6d) [0x7fb3c7ba434d]
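For illustration, a minimal standalone model (not RocksDBBlueFSVolumeSelector) of the accounting invariant behind the assert: per-volume usage must be increased and decreased symmetrically, and a sub_usage for an extent that was never added, or was added with a different length, fails the cur >= length check. The VolumeUsage type is invented for the example.

// Illustrative model only: unbalanced add_usage/sub_usage trips the same
// kind of check as FAILED ceph_assert(cur >= p.length) above.
#include <cassert>
#include <cstdint>
#include <iostream>

struct VolumeUsage {
  uint64_t cur = 0;
  void add_usage(uint64_t length) { cur += length; }
  void sub_usage(uint64_t length) {
    assert(cur >= length && "usage would go negative");
    cur -= length;
  }
};

int main() {
  VolumeUsage v;
  v.add_usage(0x10000);   // extent allocated and accounted
  v.sub_usage(0x10000);   // balanced: fine
  // A flush path that forgets the matching add_usage (or subtracts twice)
  // would do this next and abort:
  // v.sub_usage(0x1000);
  std::cout << "cur=" << v.cur << "\n";
}
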
bluestore - Bug #43353 (Can't reproduce): BlueFS files read and written at the same time
https://tracker.ceph.com/issues/43353 - 2019-12-17T15:42:01Z - Adam Kupczyk

A heavily loaded BlueStore with RocksDB fails to compact.
It fails on a check against simultaneously flushing (writing) a file and having it open for reading.
Assert:
/work/adam/ceph-4/src/os/bluestore/BlueFS.cc: 2532: FAILED ceph_assert(h->file->num_readers.load() == 0)
Callstack:
#6  0x000055bff8ea90fb in ceph::__ceph_assert_fail (assertion=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>) at /work/adam/ceph-4/src/common/assert.cc:73
#7  0x000055bff8ea927a in ceph::__ceph_assert_fail (ctx=...) at /work/adam/ceph-4/src/common/assert.cc:78
#8  0x000055bff94d18e1 in BlueFS::_flush_range (this=this@entry=0x55c004590800, h=h@entry=0x55c0049fadc0, offset=offset@entry=741911, length=length@entry=192) at /work/adam/ceph-4/src/os/bluestore/BlueFS.cc:2532
#9  0x000055bff94d1a4b in BlueFS::_flush (this=this@entry=0x55c004590800, h=h@entry=0x55c0049fadc0, force=force@entry=true) at /work/adam/ceph-4/src/os/bluestore/BlueFS.cc:2782
#10 0x000055bff94d4c07 in BlueFS::_fsync (this=this@entry=0x55c004590800, h=h@entry=0x55c0049fadc0, l=...) at /work/adam/ceph-4/src/os/bluestore/BlueFS.cc:2831
#11 0x000055bff94f49c3 in fsync (h=0x55c0049fadc0, this=0x55c004590800) at /work/adam/ceph-4/src/os/bluestore/BlueFS.h:557
#12 BlueRocksWritableFile::Sync (this=<optimized out>) at /work/adam/ceph-4/src/os/bluestore/BlueRocksEnv.cc:220
#13 0x000055bff9ad4151 in rocksdb::WritableFileWriter::SyncInternal (this=this@entry=0x55c004d8eaa0, use_fsync=use_fsync@entry=false) at /work/adam/ceph-4/src/rocksdb/util/file_reader_writer.cc:426
#14 0x000055bff9ad56f8 in rocksdb::WritableFileWriter::Sync (this=this@entry=0x55c004d8eaa0, use_fsync=<optimized out>) at /work/adam/ceph-4/src/rocksdb/util/file_reader_writer.cc:395
#15 0x000055bff9adb0d7 in rocksdb::SyncManifest (env=<optimized out>, db_options=0x55c0039b6a08, file=0x55c004d8eaa0) at /work/adam/ceph-4/src/rocksdb/util/filename.cc:407
#16 0x000055bff9a244ad in rocksdb::VersionSet::ProcessManifestWrites (this=this@entry=0x55c003989900, writers=..., mu=0x55c0039b6c60, db_directory=db_directory@entry=0x55c005fc3060, new_descriptor_log=<optimized out>, new_descriptor_log@entry=false, new_cf_options=0x0) at /work/adam/ceph-4/src/rocksdb/db/version_set.cc:3090
#17 0x000055bff9a25538 in rocksdb::VersionSet::LogAndApply (this=0x55c003989900, column_family_datas=..., mutable_cf_options_list=..., edit_lists=..., mu=<optimized out>, db_directory=0x55c005fc3060, new_descriptor_log=false, new_cf_options=0x0) at /work/adam/ceph-4/src/rocksdb/db/version_set.cc:3310
#18 0x000055bff99618ec in rocksdb::VersionSet::LogAndApply (this=0x55c003989900, column_family_data=<optimized out>, mutable_cf_options=..., edit=<optimized out>, mu=0x55c0039b6c60, db_directory=0x55c005fc3060, new_descriptor_log=false, column_family_options=0x0) at /work/adam/ceph-4/src/rocksdb/db/version_set.h:774
#19 0x000055bff9979da0 in rocksdb::DBImpl::BackgroundCompaction (this=this@entry=0x55c0039b6800, made_progress=made_progress@entry=0x7f2a74e35e86, job_context=job_context@entry=0x7f2a74e35ea0, log_buffer=log_buffer@entry=0x7f2a74e36070, prepicked_compaction=prepicked_compaction@entry=0x0, thread_pri=<optimized out>) at /work/adam/ceph-4/src/rocksdb/db/db_impl_compaction_flush.cc:2542
#20 0x000055bff997fc06 in rocksdb::DBImpl::BackgroundCallCompaction (this=this@entry=0x55c0039b6800, prepicked_compaction=prepicked_compaction@entry=0x0, bg_thread_pri=bg_thread_pri@entry=rocksdb::Env::LOW) at /work/adam/ceph-4/src/rocksdb/db/db_impl_compaction_flush.cc:2192
#21 0x000055bff99800ba in rocksdb::DBImpl::BGWorkCompaction (arg=<optimized out>) at /work/adam/ceph-4/src/rocksdb/db/db_impl_compaction_flush.cc:1972
#22 0x000055bff9b5d9b4 in operator() (this=0x7f2a74e369f0) at /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/std_function.h:706
#23 rocksdb::ThreadPoolImpl::Impl::BGThread (this=this@entry=0x55c0045bad20, thread_id=thread_id@entry=0) at /work/adam/ceph-4/src/rocksdb/util/threadpool_imp.cc:265
#24 0x000055bff9b5db4d in rocksdb::ThreadPoolImpl::Impl::BGThreadWrapper (arg=0x55c004d11330) at /work/adam/ceph-4/src/rocksdb/util/threadpool_imp.cc:306
#25 0x000055bff9c15f9f in execute_native_thread_routine ()
#26 0x00007f2a83e5ce25 in start_thread (arg=0x7f2a74e39700) at pthread_create.c:308
#27 0x00007f2a82ec434d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
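For illustration, a minimal standalone model (not the BlueFS code) of the invariant the assert enforces: the flush path requires that no reader currently holds the file, tracked with an atomic reader counter. The File, ReadGuard and flush_range names are invented model types.

// Illustrative model only: the write/flush path asserts that no reader is
// registered, mirroring ceph_assert(h->file->num_readers.load() == 0).
#include <atomic>
#include <cassert>
#include <iostream>

struct File {
  std::atomic<int> num_readers{0};
};

struct ReadGuard {               // a reader registers itself for its lifetime
  File& f;
  explicit ReadGuard(File& file) : f(file) { f.num_readers.fetch_add(1); }
  ~ReadGuard() { f.num_readers.fetch_sub(1); }
};

void flush_range(File& f) {
  // The tracker report is about this invariant being violated under heavy
  // rocksdb compaction: a flush ran while a reader was still registered.
  assert(f.num_readers.load() == 0);
  // ... write out dirty data ...
}

int main() {
  File f;
  flush_range(f);                // ok: no readers
  {
    ReadGuard r(f);              // a concurrent read in progress
    std::cout << "readers during read: " << f.num_readers.load() << "\n";
    // flush_range(f);           // would abort, as in the tracker report
  }
  std::cout << "readers after read: " << f.num_readers.load() << "\n";
}
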
bluestore - Bug #38176 (Won't Fix): Unable to recover from ENOSPC in BlueFS, WAL
https://tracker.ceph.com/issues/38176 - 2019-02-05T11:28:01Z - Adam Kupczyk

It is possible to insert so much OMAP data into objects that it overflows the storage and causes ENOSPC when RocksDB tries to flush the WAL and needs to extend the BlueFS space.
This is a fatal error, as there really is less space than required.
This bug happened as a result of testing bug #36268 (https://tracker.ceph.com/issues/36268).
How to replicate this bug:
https://drive.google.com/file/d/1KJAmuz2YGoL17UYEmd6VUegBMABk6gmG

sepia - Support #21261 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/21261 - 2017-09-06T12:44:04Z - Adam Kupczyk
I already had access to the Sepia lab, but I lost my VPN 'secret' due to a laptop failure.
1) Do you just need VPN access or will you also be running teuthology jobs?
BOTH

2) Desired Username:
akupczyk

3) Alternate e-mail address(es) we can reach you at:
akupczyk@redhat.com

4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?
I'm a Ceph team member. People on my team: Matt Benjamin, Yehuda Sadeh, Ian Colle
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
5) Paste your SSH public key(s) between the pre tags
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+CtnvyVRSkIvD15O/q7/vQmD4oE8q7Mo6oT3kqTXaEaQ7iDayC2biUC0PC4OztPlNJxPUcKK5DfWw4F79OKeheqUbZRhsCC3Ge2JGoXHmsf0kBYdioMGiAxm+f8M/v1KESSSxvEnU7o+oi6VtFchh4Dl/WZ54rqZc/oQrlGNTHulFAuyoIAlKkCe3N0GdHN54PB+26QbLXvOuqpWmjVFbcjgRC62mkZl3LuCBhviXS4Rl5/ZE0QIHubCwV5XVrsqvscLS64bi06W8RN2vL56Vn3N9TGV4d2dlbE1PaBBESoTQ7Q0iUN32OAZLo7nF/1fiPY3Xc8r35juya5k+KR9z adam@TP50

6) Paste your hashed VPN credentials between the pre tags (Format: user@hostname 22CharacterSalt 65CharacterHashedPassword)
adam@TP50 C0YuBT9bYaNhdDmjbF56xg 5d298b33b9dbaef364b037561aa5c5de374405bb8afead5280db5b212506ea58

sepia - Support #21012 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/21012 - 2017-08-16T13:37:04Z - Adam Kupczyk
1) Do you just need VPN access or will you also be running teuthology jobs?
Running teuthology jobs

2) Desired Username:
akupczyk

3) Alternate e-mail address(es) we can reach you at:
akupczyk@redhat.com

4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?
Matt Benjamin, Yehuda Sadeh, Ian Colle
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
5) Paste your SSH public key(s) between the pre tags
AAAAB3NzaC1yc2EAAAADAQABAAABAQC+CtnvyVRSkIvD15O/q7/vQmD4oE8q7Mo6oT3kqTXaEaQ7iDayC2biUC0PC4OztPlNJxPUcKK5DfWw4F79OKeheqUbZRhsCC3Ge2JGoXHmsf0kBYdioMGiAxm+f8M/v1KESSSxvEnU7o+oi6VtFchh4Dl/WZ54rqZc/oQrlGNTHulFAuyoIAlKkCe3N0GdHN54PB+26QbLXvOuqpWmjVFbcjgRC62mkZl3LuCBhviXS4Rl5/ZE0QIHubCwV5XVrsqvscLS64bi06W8RN2vL56Vn3N9TGV4d2dlbE1PaBBESoTQ7Q0iUN32OAZLo7nF/1fiPY3Xc8r35juya5k+KR9z

6) Paste your hashed VPN credentials between the pre tags (Format: user@hostname 22CharacterSalt 65CharacterHashedPassword)
akupczyk@TP50 Qt+xhD+ctLmpH6EL97Ao5Q f565d09017cf4579bf1b7fb4c5428d8627c388bb9750038990a834f1ab541970

sepia - Support #19374 (Resolved): Requesting lab access for scheduling jobs
https://tracker.ceph.com/issues/19374 - 2017-03-24T15:45:00Z - Adam Kupczyk
1. Access type: scheduling jobs.
2. akupczyk
3. akupczyk@mirantis.com
4. cbodley@redhat.com
5.
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCrgAewklX1BCq0ztYmDETNqjeADANBfISB9HOCXN4O9bIhyJ0ZPS1dwsIjyb8n66laZ/suFOnJnRoQ/JybZvxUqyBmzXZUpZf0zdj0LtmgtGABGWMcXAJkeMlLJbpYTxm4df1F178ZCzIkKllncNhhg2fb/oT0cnXAwUZgiPpznpvh6g3r7fRIH3A3tR5LJO/XUPkkj2N1rct9Llui03CJLSdrmgswmGrjF1xqyfTqUqNB7Lv/7+XNTDbk3TEoxow8XjFbGPOuOJ3sgQjzzrhIgSbk9QoL+0rsruSfugfjZslnxUo+Bzr9D7SXTcUT1hnsMHXai+vjV9I6QwbUREtH akupczyk@mirantis.com
6. sepia/new client
akupczyk@mirantis.com CYOXIMiUyhgsCwtLjiZtrw f3b9c21f4b3b5f198521db4a613ee797b533d606da4760eeb4acd6f528120215