Bug #40410

open

ceph pg query Segmentation fault in 12.2.10

Added by qingbo han almost 5 years ago. Updated over 4 years ago.

Status: New
Priority: Normal
Assignee:
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport: nautilus, mimic, luminous
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: rados
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I used "ceph pg 16.7ff query" on luminous 12.2.10 and it always ends in a segmentation fault.
I ran the command under gdb; the stack is as follows. I think this may be a bug:

#0 0x00002b2a6b7d3263 in OSDMap::_pg_to_raw_osds(pg_pool_t const&, pg_t, std::vector<int, std::allocator<int> >*, unsigned int*) const ()
from /usr/lib64/ceph/libceph-common.so.0
#1 0x00002b2a6b7d50b6 in OSDMap::_pg_to_up_acting_osds(pg_t const&, std::vector<int, std::allocator<int> >*, int*, std::vector<int, std::allocator<int> >*, int*, bool) const () from /usr/lib64/ceph/libceph-common.so.0
#2 0x00002b2a6b0d0bfd in Objecter::_calc_target(Objecter::op_target_t*, Connection*, bool) () from /lib64/librados.so.2
#3 0x00002b2a6b0df953 in Objecter::_calc_command_target(Objecter::CommandOp*, ceph::shunique_lock<boost::shared_mutex>&) () from /lib64/librados.so.2
#4 0x00002b2a6b0f47c9 in Objecter::submit_command(Objecter::CommandOp*, unsigned long*) () from /lib64/librados.so.2
#5 0x00002b2a6b0b0c2e in librados::RadosClient::pg_command(pg_t, std::vector<std::string, std::allocator<std::string> >&, ceph::buffer::list const&, ceph::buffer::list*, std::string*) () from /lib64/librados.so.2
#6 0x00002b2a6b07b0fb in rados_pg_command () from /lib64/librados.so.2
#7 0x00002b2a6ada7bbb in __pyx_pw_5rados_5Rados_57pg_command () from /usr/lib64/python2.7/site-packages/rados.so
#8 0x00002b2a622d720a in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#9 0x00002b2a622d903d in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#10 0x00002b2a622d653c in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#11 0x00002b2a622d66bd in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#12 0x00002b2a622d903d in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#13 0x00002b2a62262978 in function_call () from /lib64/libpython2.7.so.1.0
#14 0x00002b2a6223da63 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#15 0x00002b2a6224ca55 in instancemethod_call () from /lib64/libpython2.7.so.1.0
#16 0x00002b2a6223da63 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#17 0x00002b2a622cf8f7 in PyEval_CallObjectWithKeywords () from /lib64/libpython2.7.so.1.0
#18 0x00002b2a62307822 in t_bootstrap () from /lib64/libpython2.7.so.1.0
#19 0x00002b2a625c5dd5 in start_thread () from /lib64/libpthread.so.0
#20 0x00002b2a62fe0ead in clone () from /lib64/libc.so.6


Files

ceph-report.zip (47.3 KB), qingbo han, 06/24/2019 07:05 AM
leaky.zip (738 KB), valgrind trace result, qingbo han, 06/25/2019 07:19 AM
crushmap.zip (4.86 KB), qingbo han, 06/26/2019 06:40 AM
Actions #1

Updated by Brad Hubbard almost 5 years ago

  • Project changed from rgw to RADOS
  • Assignee set to Brad Hubbard
  • Source set to Community (user)
  • Affected Versions v12.2.10 added
  • ceph-qa-suite rados added

Could you provide details of your OS and upload a debug log with debug_osd=20 and a coredump? You can use http://docs.ceph.com/docs/master/man/8/ceph-post-file/ if the files are large and post the ID here.

Actions #2

Updated by qingbo han almost 5 years ago

Brad Hubbard wrote:

Could you provide details of your OS and upload a debug log with debug_osd=20 and a coredump? You can use http://docs.ceph.com/docs/master/man/8/ceph-post-file/ if the files are large and post the ID here.

My OS is CentOS 7.4.1708, kernel 3.10.0-693.el7.x86_64. The coredump and log have been uploaded; the id is abe76619-c290-4279-88a0-c310bf404ad7.

Actions #3

Updated by qingbo han almost 5 years ago

qingbo han wrote:

Brad Hubbard wrote:

Could you provide details of your OS and upload a debug log with debug_osd=20 and a coredump? You can use http://docs.ceph.com/docs/master/man/8/ceph-post-file/ if the files are large and post the ID here.

My OS is CentOS 7.4.1708, kernel 3.10.0-693.el7.x86_64. The coredump and log have been uploaded; the id is abe76619-c290-4279-88a0-c310bf404ad7.

The command that produced the coredump is: ceph pg 6.7f3 query

Actions #4

Updated by Brad Hubbard almost 5 years ago

Thanks for that. Could you attach the output of "ceph report" please?

Actions #5

Updated by qingbo han almost 5 years ago

The output of "ceph report" is attached.

Actions #6

Updated by Brad Hubbard almost 5 years ago

Could you please install the ceph-debuginfo and valgrind packages and then run the following command?

# valgrind --trace-children=yes --show-reachable=yes --track-origins=yes --read-var-info=yes --read-inline-info=yes --tool=memcheck --leak-check=full \
  --num-callers=50 -v --log-file=leaky.log /usr/bin/python2.7 /usr/bin/ceph pg 6.7f3 query
Actions #7

Updated by qingbo han almost 5 years ago

Brad Hubbard wrote:

Could you please install the ceph-debuginfo and valgrind packages and then run the following command?

[...]

leaky.log is attached.

Actions #8

Updated by Brad Hubbard almost 5 years ago

Hello Han,

It's still not clear to me exactly what is going on here. There is some sort of invalid memory access occurring but the specifics are difficult to pinpoint. Can you confirm whether this is happening on all clients or only one? Could you also dump your crushmap and upload it please?

# ceph osd getcrushmap -o crushmap
Actions #9

Updated by qingbo han almost 5 years ago

Brad Hubbard wrote:

Hello Han,

It's still not clear to me exactly what is going on here. There is some sort of invalid memory access occurring but the specifics are difficult to pinpoint. Can you confirm whether this is happening on all clients or only one? Could you also dump your crushmap and upload it please?

[...]

It always segfaults on the mon nodes, but it works fine on the OSD nodes.

Actions #10

Updated by Brad Hubbard almost 5 years ago

Interesting, thanks Han.

Would you mind uploading an sosreport from one node where the failure does happen and one node where the failure does not happen so I can compare the two?

Actions #11

Updated by Brad Hubbard almost 5 years ago

  • Status changed from New to Need More Info
Actions #12

Updated by qingbo han almost 5 years ago

Brad Hubbard wrote:

Interesting, thanks Han.

Would you mind uploading an sosreport from one node where the failure does happen and one node where the failure does not happen so I can compare the two?

Very sorry for the late reply. I used ceph-post-file to upload the sosreport files; the id is 56e38f08-23d9-4671-8652-79e57ab53217.

Actions #13

Updated by Brad Hubbard almost 5 years ago

Hello Han,

I don't see any glaring differences in the binaries so far but I did notice this in the dmesg output.

[1231643.832664] ceph[1650]: segfault at 2b0e13823cb8 ip 00002b0e044f1263 sp 00002b0e13823cc0 error 7 in libceph-common.so.0[2b0e04089000+87a000]                                                                                            
[1490963.857776] ceph[28532]: segfault at 2b5c70420cb8 ip 00002b5c610ee263 sp 00002b5c70420cc0 error 7 in libceph-common.so.0[2b5c60c86000+87a000]                                                                                           
[1491956.978979] python[9784]: segfault at 2b5fe87dacb8 ip 00002b5fd9779263 sp 00002b5fe87dacc0 error 7 in libceph-common.so.0[2b5fd9311000+87a000]                                                                                          
[1492434.520028] python[28350]: segfault at 2acda0746cb8 ip 00002acd916da263 sp 00002acda0746cc0 error 7 in libceph-common.so.0[2acd91272000+87a000]                                                                                         
[1492888.343449] python[19340]: segfault at 2b12e1d10cb8 ip 00002b12d2caf263 sp 00002b12e1d10cc0 error 7 in libceph-common.so.0[2b12d2847000+87a000]                                                                                         
[1567005.556286] ceph[21339]: segfault at 2b364b743cb8 ip 00002b363c411263 sp 00002b364b743cc0 error 7 in libceph-common.so.0[2b363bfa9000+87a000]                                                                                           
[1567047.292060] ceph[22418]: segfault at 2b0054fbdcb8 ip 00002b0045c8b263 sp 00002b0054fbdcc0 error 7 in libceph-common.so.0[2b0045823000+87a000]                                                                                           
[1567382.981259] ceph[3258]: segfault at 2ab71115fcb8 ip 00002ab701e2d263 sp 00002ab71115fcc0 error 7 in libceph-common.so.0[2ab7019c5000+87a000]                                                                                            
[1726226.337424] ceph[3608]: segfault at 2b68cf7cdcb8 ip 00002b68c049b263 sp 00002b68cf7cdcc0 error 7 in libceph-common.so.0[2b68c0033000+87a000]

Note that some of the segfaults are in ceph and some are in python, indicating there is at least some difference between the two crashes. Would you be able to gather several coredumps and upload examples of at least the two different crashes? You can use the dmesg output to tell whether a given segfault was in ceph or in python, so you can work out when you have one of each.

Actions #14

Updated by qingbo han almost 5 years ago

Hi Brad Hubbard,
I tried several times but failed to reproduce the segfault in python. I have uploaded a coredump of the ceph crash; the id is 3c92d277-d7a1-4c1a-a06c-d2d836a8e819.

Thanks

Actions #15

Updated by Brad Hubbard almost 5 years ago

Still looking into this. The issue in the new core is the same as the original coredump.

Actions #16

Updated by Brad Hubbard almost 5 years ago

Hello Han,

Many thanks to Radoslaw Zarzynski for the fruitful discussion we had regarding this issue last night. It allowed me to make considerable progress. Radek pointed out to me that it looked like there was a large allocation happening on the stack just prior to the segfault. Immediately it was evident what the likely problem was and I'll work through our current theory now.

(gdb) bt
#0  0x00002b1152c18263 in do_rule<std::vector<unsigned int, mempool::pool_allocator<(mempool::pool_index_t)15, unsigned int> > > (choose_args_index=6, weight=std::vector of length 231, capacity 256 = {...}, maxout=3, out=std::vector of length 0, capacity 0, x=-1772799189, rule=3, this=0x2b1164142628) at /usr/src/debug/ceph-12.2.10/src/crush/CrushWrapper.h:1498
#1  OSDMap::_pg_to_raw_osds (this=this@entry=0x2b1164142050, pool=..., pg=..., osds=osds@entry=0x2b116214baf0, ppps=ppps@entry=0x2b116214bae4) at /usr/src/debug/ceph-12.2.10/src/osd/OSDMap.cc:2071
#2  0x00002b1152c1a0b6 in OSDMap::_pg_to_up_acting_osds (this=0x2b1164142050, pg=..., up=up@entry=0x2b116214bca0, up_primary=up_primary@entry=0x2b116214bc44, acting=acting@entry=0x2b116214bcc0, acting_primary=acting_primary@entry=0x2b116214bc48, raw_pg_to_pg=raw_pg_to_pg@entry=true) at /usr/src/debug/ceph-12.2.10/src/osd/OSDMap.cc:2306
#3  0x00002b1152515bfd in pg_to_up_acting_osds (acting_primary=0x2b116214bc48, acting=0x2b116214bcc0, up_primary=0x2b116214bc44, up=0x2b116214bca0, pg=..., this=<optimized out>) at /usr/src/debug/ceph-12.2.10/src/osd/OSDMap.h:1159
#4  Objecter::_calc_target (this=0x2b11641419a0, t=t@entry=0x2b1164154fa8, con=con@entry=0x0, any_change=any_change@entry=true) at /usr/src/debug/ceph-12.2.10/src/osdc/Objecter.cc:2847
#5  0x00002b1152524953 in Objecter::_calc_command_target (this=this@entry=0x2b11641419a0, c=c@entry=0x2b1164154ef0, sul=...) at /usr/src/debug/ceph-12.2.10/src/osdc/Objecter.cc:4859
#6  0x00002b11525397c9 in Objecter::submit_command (this=0x2b11641419a0, c=0x2b1164154ef0, ptid=0x2b116214c1d8) at /usr/src/debug/ceph-12.2.10/src/osdc/Objecter.cc:4814
#7  0x00002b11524f5c2e in pg_command (onfinish=0x2b1164068110, prs=0x2b116214c3a0, poutbl=0x2b116214c440, ptid=0x2b116214c1d8, inbl=..., cmd=std::vector of length 1, capacity 1 = {...}, pgid=..., this=<optimized out>) at /usr/src/debug/ceph-12.2.10/src/osdc/Objecter.h:2226
#8  librados::RadosClient::pg_command (this=this@entry=0x2b1164066700, pgid=..., cmd=std::vector of length 1, capacity 1 = {...}, inbl=..., poutbl=poutbl@entry=0x2b116214c440, prs=prs@entry=0x2b116214c3a0) at /usr/src/debug/ceph-12.2.10/src/librados/RadosClient.cc:921
#9  0x00002b11524c00fb in rados_pg_command (cluster=0x2b1164066700, pgstr=pgstr@entry=0x2b1161d2a474 "6.7f3", cmd=cmd@entry=0x2b116414ba20, cmdlen=cmdlen@entry=1, inbuf=inbuf@entry=0x2b114944752c "", inbuflen=inbuflen@entry=0, outbuf=outbuf@entry=0x2b116214c540, outbuflen=outbuflen@entry=0x2b116214c548, outs=outs@entry=0x2b116214c550, outslen=outslen@entry=0x2b116214c558) at /usr/src/debug/ceph-12.2.10/src/librados/librados.cc:3530
#10 0x00002b11521ecbbb in __pyx_pf_5rados_5Rados_56pg_command (__pyx_v_timeout=<optimized out>, __pyx_v_inbuf=0x2b1149447508, __pyx_v_cmd=0x2b1161d2d9e0, __pyx_v_pgid=0x2b1161d2a450, __pyx_v_self=0x2b115edf4590) at /usr/src/debug/ceph-12.2.10/build/src/pybind/rados/pyrex/rados.c:14381
#11 __pyx_pw_5rados_5Rados_57pg_command (__pyx_v_self=0x2b115edf4590, __pyx_args=<optimized out>, __pyx_kwds=<optimized out>) at /usr/src/debug/ceph-12.2.10/build/src/pybind/rados/pyrex/rados.c:14172
#12 0x00002b114971c20a in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#13 0x00002b114971e03d in dfs () from /lib64/libpython2.7.so.1.0
#14 0x00002b1161d255d8 in ?? ()
#15 0x0000000000000000 in ?? ()
(gdb) x/i $pc
=> 0x2b1152c18263 <OSDMap::_pg_to_raw_osds(pg_pool_t const&, pg_t, std::vector<int, std::allocator<int> >*, unsigned int*) const+259>:  callq  0x2b1152e1b750 <crush_init_workspace>

The instruction we segfaulted on was a call to the function crush_init_workspace. At first glance this looks odd, as there is nothing wrong with the address of crush_init_workspace itself, but Radek reminded me the fault is probably in the call preamble, where we are essentially doing one or more pushes followed by a jmp.

Looking at the code leading up to the call:

1493      void do_rule(int rule, int x, vector<int>& out, int maxout,
1494                   const WeightVector& weight,
1495                   uint64_t choose_args_index) const {
1496        int rawout[maxout];
1497        char work[crush_work_size(crush, maxout)];
1498        crush_init_workspace(crush, work);

Immediately the array allocations on the stack jump out. maxout = 3, so the first allocation shouldn't be a problem, but let's take a closer look at the allocation of the "work" array and how 'crush_work_size' works.

86      static inline size_t crush_work_size(const struct crush_map *map,
87                                           int result_max) {
88              return map->working_size + result_max * 3 * sizeof(__u32);
89      }

(gdb) p crush->working_size + maxout * 3 * sizeof(__u32)
$21 = 2100508

That looks like we are trying to allocate over 2MB on the stack. Let's look at the assembly.

   0x00002b1152c18247 <+231>:   lea    0xf(%rax,%rcx,4),%rax                                                          
   0x00002b1152c1824c <+236>:   mov    %r10,-0x78(%rbp)                                                                                                                                                                                      
   0x00002b1152c18250 <+240>:   and    $0xfffffffffffffff0,%rax                                                       
   0x00002b1152c18254 <+244>:   sub    %rax,%rsp     
   0x00002b1152c18257 <+247>:   lea    0x18(%rsp),%rcx
   0x00002b1152c1825c <+252>:   mov    %rcx,%rsi                                                                                                                                                                                             
   0x00002b1152c1825f <+255>:   mov    %rcx,-0x70(%rbp)
=> 0x00002b1152c18263 <+259>:   callq  0x2b1152e1b750 <crush_init_workspace>

So it looks like %rax should hold the result of 'crush_work_size', and the stack pointer is then moved down by that amount.

(gdb) i r rax rbp rsp
rax            0x200d20 2100512
rbp            0x2b116214baa0   0x2b116214baa0
rsp            0x2b1161f4acc0   0x2b1161f4acc0

(gdb) p/d 0x2b116214baa0-0x2b1161f4acc0
$22 = 2100704

So the distance between our stack pointer and our base pointer is the size of our "work" array plus a bit (note: it's not clear to me why the figures are close but not exact here; some form of rounding, maybe?).

We can use 'info target' to work out the memory segments containing the addresses held in %rbp and %rsp and then use readelf on the core to see the attributes of the memory.

0x00002b1161f4a000 - 0x00002b1161f4b000 is load300       // %rsp
0x00002b1161f4e000 - 0x00002b116214e000 is load303       // %rbp

# readelf --program-headers  core.28729|egrep -A1 '(0x00002b1161f4a000|0x00002b1161f4e000)'
  LOAD           0x0000000008c02000 0x00002b1161f4a000 0x0000000000000000
                 0x0000000000001000 0x0000000000001000  R      1000
--
  LOAD           0x0000000008c06000 0x00002b1161f4e000 0x0000000000000000
                 0x0000000000200000 0x0000000000200000  RW     1000

So it looks to me like the stack pointer (%rsp) has been positioned in a segment of memory that is read-only, and the push done in the preamble to calling 'crush_init_workspace' is trying to write to that memory. This theory would explain why the error code in the dmesg output is 7.

$ grep segfault sosreport-qbs-monitor-online001-hbaz2-2019-07-02-xauyzgm/sos_commands/kernel/dmesg|head -5
[1231643.832664] ceph[1650]: segfault at 2b0e13823cb8 ip 00002b0e044f1263 sp 00002b0e13823cc0 error 7 in libceph-common.so.0[2b0e04089000+87a000]
[1490963.857776] ceph[28532]: segfault at 2b5c70420cb8 ip 00002b5c610ee263 sp 00002b5c70420cc0 error 7 in libceph-common.so.0[2b5c60c86000+87a000]
[1491956.978979] python[9784]: segfault at 2b5fe87dacb8 ip 00002b5fd9779263 sp 00002b5fe87dacc0 error 7 in libceph-common.so.0[2b5fd9311000+87a000]
[1492434.520028] python[28350]: segfault at 2acda0746cb8 ip 00002acd916da263 sp 00002acda0746cc0 error 7 in libceph-common.so.0[2acd91272000+87a000]
[1492888.343449] python[19340]: segfault at 2b12e1d10cb8 ip 00002b12d2caf263 sp 00002b12e1d10cc0 error 7 in libceph-common.so.0[2b12d2847000+87a000]
The error code is a bitmask, and 7 = 1 + 2 + 4:

+1     protection fault in a mapped area (e.g. writing to a read-only mapping)
+2     write access (instead of a read)
+4     user-mode access (instead of kernel-mode access)
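
For illustration only, here is a tiny standalone decoder for that bitmask (my own sketch, not Ceph code; the bit meanings are the standard x86 page-fault error bits):

#include <cstdio>

// Decode the "error N" value from a kernel segfault line in dmesg.
// These are the standard x86 page-fault error-code bits.
static void decode_segfault_error(unsigned err) {
    std::printf("error %u:\n", err);
    std::printf("  %s\n", (err & 1) ? "protection fault in a mapped area" : "page not present");
    std::printf("  %s access\n", (err & 2) ? "write" : "read");
    std::printf("  %s mode\n", (err & 4) ? "user" : "kernel");
}

int main() {
    decode_segfault_error(7);  // 1 + 2 + 4: a user-mode write to a mapped but write-protected page
    return 0;
}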

This theory also aligns nicely with the following error I noticed in the valgrind output I asked you to collect earlier (note the value it flags is the value in %rax).

==4194== Warning: client switching stacks?  SP change: 0x1f5eb9e0 --> 0x1f3eacc0                                                                                                                                                             
==4194==          to suppress, use: --max-stackframe=2100512 or greater                                                                                                                                                                      
==4194==
==4194== Process terminating with default action of signal 11 (SIGSEGV)                                                                                                                                                                      
==4194==  Bad permissions for mapped region at address 0x1F3EACB8                                                                                                                                                                            
==4194==    at 0xF6D8263: OSDMap::_pg_to_raw_osds(pg_pool_t const&, pg_t, std::vector<int, std::allocator<int> >*, unsigned int*) const (in /usr/lib64/ceph/libceph-common.so.0)                                                             
==4194==
==4194== Process terminating with default action of signal 11 (SIGSEGV)                                                                                                                                                                      
==4194==  Bad permissions for mapped region at address 0x1F3EACB0                                                                                                                                                                            
==4194==    at 0x4A24720: _vgnU_freeres (vg_preloaded.c:59)

This is all just a theory right now, but we may be able to prove it to some extent by running the program with a larger stack size and seeing whether it still segfaults. Adjusting the stack size should be achievable with ulimit: you can see the current value with 'ulimit -s' (it should be 8192), set a new one with something like 'ulimit -s 16384', and try running the pg query again.

Why isn't this seen on your other machines? Possibly, due to the way memory happens to be laid out, the other machines just "get away with it". This code remains the same in master as well, so it's not clear why others haven't seen this. One possibility is that there is something unusual about your crushmap (specifically the high value of 'max_buckets', which is used in the calculation of 'working_size'); I'm looking into that and should have more for you soon.

I'll also look into a patch that does the 'work' allocation on the heap, avoiding the stack allocation altogether, and can build you some test packages if you would be interested in testing with them.

(gdb) p *crush
$1 = {
  buckets = 0x2b1178402010, 
  rules = 0x2b117401a120, 
  max_buckets = 262144,
  max_rules = 4, 
  max_devices = 231, 
  choose_local_tries = 0, 
  choose_local_fallback_tries = 0, 
  choose_total_tries = 50, 
  chooseleaf_descend_once = 1, 
  chooseleaf_vary_r = 1 '\001', 
  chooseleaf_stable = 1 '\001', 
  working_size = 2100472, 
  straw_calc_version = 1 '\001', 
  allowed_bucket_algs = 54, 
  choose_tries = 0x0
}
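
To put concrete numbers on the theory, here is a small self-contained C++ toy of mine (not Ceph code) that redoes the crush_work_size arithmetic with the values from this coredump, and shows that the same buffer is trivially satisfiable on the heap, which is the kind of change the heap-allocation patch mentioned above would make:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in for crush_work_size() from the header quoted above:
// the crush map's working_size plus 3 * sizeof(__u32) per requested result.
static size_t crush_work_size_like(size_t working_size, int result_max) {
    return working_size + result_max * 3 * sizeof(uint32_t);
}

int main() {
    const size_t working_size = 2100472;  // crush->working_size from the coredump
    const int maxout = 3;                 // maxout in the crashing do_rule() frame

    const size_t need = crush_work_size_like(working_size, maxout);
    std::printf("crush workspace needed: %zu bytes (~%.2f MB)\n", need, need / (1024.0 * 1024.0));

    // do_rule() currently carves this out of the stack with a VLA
    // ("char work[crush_work_size(crush, maxout)]"), which is what overruns
    // the available stack here. A heap-backed buffer has no such limit:
    std::vector<char> work(need);
    std::printf("heap allocation of %zu bytes succeeded\n", work.size());
    return 0;
}

It prints a requirement of 2100508 bytes, matching the gdb calculation above.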
Actions #17

Updated by qingbo han almost 5 years ago

Hi Brad Hubbard:
I think your theory is correct. ceph pg query runs correctly when I set 'ulimit -s 16384'. You said there is something unusual about our crushmap.
Do you need me to provide more info about the crushmap?

Actions #18

Updated by Brad Hubbard almost 5 years ago

qingbo han wrote:

Hi Brad Hubbard:
I think your theory is correct. ceph pg query runs correctly when I set 'ulimit -s 16384'. You said there is something unusual about our crushmap.

Excellent news Han, thanks! That's definitely the bug then. I'll get started on a fix and will keep this tracker updated with the solution.

Do you need me to provide more info about the crushmap?

Maybe just some insight into how the bucket IDs ended up the way they are. I've included all your bucket IDs below and noticed quite early on that the range of bucket IDs is quite large. Could you briefly explain how you ended up with such diverse bucket IDs so we can understand the reasoning better? Please note that this bucket ID scheme is not at all prohibited, or even discouraged, but I guess the original author of the code did not anticipate the use of such high values and that is what has led us to this bug.

$ jq '.crushmap|.buckets|.[]|.id' ceph-report                                             
-1    
-2    
-3    
-4     
-5     
-6     
-7     
-8     
-9
-10
-11
-12
-13
-14
-15
-16
-17
-18
-19
-20
-21
-22
-23
-24
-25
-26
-27
-28
-29
-30
-31
-32
-33
-34
-35
-36
-37
-38
-39
-40
-41
-42
-43
-44
-45
-46
-47
-48
-49
-50
-10001
-10002
-10003
-10004
-20001
-20002
-20003
-100001
-100002
-100003
-100004
-200001
-200002
-200003
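
As an aside, here is a small sketch of why those IDs matter (my reading of the crush code; the per-slot overhead is an estimate, not taken from the source). Crush stores a bucket with ID id at array position -1 - id, so an ID of -200003 forces the bucket array, and the per-bucket bookkeeping reserved in the crush workspace, out to at least 200003 slots:

#include <cstdio>

int main() {
    // Crush addresses a bucket with ID "id" at array position -1 - id,
    // so the most negative ID dictates how many slots the bucket array
    // (and the per-bucket bookkeeping in the crush workspace) must cover.
    const int lowest_id = -200003;          // most negative bucket ID in the list above
    const int slots_needed = -lowest_id;    // positions 0 .. 200002
    std::printf("slots needed for id %d: %d\n", lowest_id, slots_needed);

    // From the crush dump in comment #16: max_buckets = 262144 (presumably the
    // next doubling step past 200003) and working_size = 2100472, i.e. roughly
    // 8 bytes of workspace per possible bucket slot (that per-slot figure is my
    // own estimate, only meant to show the linear relationship).
    std::printf("observed: max_buckets = %d, working_size = %d\n", 262144, 2100472);
    return 0;
}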
Actions #19

Updated by Brad Hubbard almost 5 years ago

  • Status changed from Need More Info to 12
  • Backport set to nautilus, mimic, luminous
Actions #20

Updated by qingbo han almost 5 years ago

Hi Brad Hubbard
We manually generated some buckets after deploying the cluster. To avoid duplicate IDs, we gave these buckets very large values.

Actions #21

Updated by Brad Hubbard almost 5 years ago

I understand, thanks Han.

Actions #22

Updated by Patrick Donnelly over 4 years ago

  • Status changed from 12 to New