Autobuilder log for (e27cf4139fbe895ef4d1817365275e6a50d603d8):
>>> Starting at: Wed Oct 8 11:01:44 UTC 2014
>>> Commit: e27cf4139fbe895ef4d1817365275e6a50d603d8
>>> Switching git branch...
--START-IGNORE-WARNINGS
HEAD is now at da074f0 Merge pull request #2668 from theanalyst/fix/mailmap-again
Previous HEAD position was da074f0... Merge pull request #2668 from theanalyst/fix/mailmap-again
HEAD is now at e27cf41... qa: cephtool tests for `tell mds.X`
HEAD is now at e27cf41 qa: cephtool tests for `tell mds.X`
--STOP-IGNORE-WARNINGS
>>> Cleaning...
>>> Building...
+ mydir=/srv/autobuild-ceph
+ export CEPH_EXTRA_CONFIGURE_ARGS= --without-cryptopp
+ hostname
+ grep -q ^gitbuilder-
+ hostname
+ grep -q -- -notcmalloc
+ hostname
+ grep -q -- -gcov
+ hostname
+ grep -q -- ceph-deb-
+ hostname
+ grep -q -- ceph-tarball-
+ exec /srv/autobuild-ceph/build-ceph.sh
+ set -e
+ git clean -fdx
+ git reset --hard
HEAD is now at e27cf41 qa: cephtool tests for `tell mds.X`
+ git submodule foreach git clean -fdx && git reset --hard
+ rm -rf ceph-object-corpus
+ rm -rf src/leveldb
+ rm -rf src/libs3
+ rm -rf src/mongoose
+ rm -rf src/civetweb
+ rm -rf src/rocksdb
+ rm -rf src/erasure-code/jerasure/gf-complete
+ rm -rf src/erasure-code/jerasure/jerasure
+ rm -rf .git/modules/
+ /srv/git/bin/git submodule sync
Synchronizing submodule url for 'ceph-object-corpus'
Synchronizing submodule url for 'src/civetweb'
Synchronizing submodule url for 'src/erasure-code/jerasure/gf-complete'
Synchronizing submodule url for 'src/erasure-code/jerasure/jerasure'
Synchronizing submodule url for 'src/libs3'
Synchronizing submodule url for 'src/rocksdb'
+ /srv/autobuild-ceph/use-mirror.sh
+ /srv/git/bin/git submodule update --init
Cloning into 'ceph-object-corpus'...
Submodule path 'ceph-object-corpus': checked out 'bb3cee6b85b93210af5fb2c65a33f3000e341a11'
Cloning into 'src/civetweb'...
Submodule path 'src/civetweb': checked out '45da9c5f9052e82a9368b92e9bfb48878fff844f'
Cloning into 'src/erasure-code/jerasure/gf-complete'...
Submodule path 'src/erasure-code/jerasure/gf-complete': checked out '191e7105b2b75f7f48ef23dfab9ae72275363168'
Cloning into 'src/erasure-code/jerasure/jerasure'...
Submodule path 'src/erasure-code/jerasure/jerasure': checked out '8fe20c6608385d6a1f38db89aec5cba85ccf04ac'
Cloning into 'src/libs3'...
Submodule path 'src/libs3': checked out 'dcf98ff04bc5dacd5d45854a32870d86dd7b26c7'
Cloning into 'src/rocksdb'...
Submodule path 'src/rocksdb': checked out '05da5930f3a130c4dea879a85a26f2c8ac7465c4'
+ git clean -fdx
+ echo --START-IGNORE-WARNINGS
--START-IGNORE-WARNINGS
+ [ ! -x autogen.sh ]
+ ./autogen.sh
+ set -e
+ test -f src/ceph.in
+ which libtoolize
+ [ /usr/bin/libtoolize ]
+ LIBTOOLIZE=libtoolize
+ test -d .git
+ git submodule update --init
Submodule 'ceph-object-corpus' () registered for path 'ceph-object-corpus'
Submodule 'src/civetweb' () registered for path 'src/civetweb'
Submodule 'src/erasure-code/jerasure/gf-complete' () registered for path 'src/erasure-code/jerasure/gf-complete'
Submodule 'src/erasure-code/jerasure/jerasure' () registered for path 'src/erasure-code/jerasure/jerasure'
Submodule 'src/libs3' () registered for path 'src/libs3'
Submodule 'src/rocksdb' () registered for path 'src/rocksdb'
+ rm -f config.cache
+ aclocal -I m4 --install
aclocal: installing `m4/libtool.m4' from `/usr/share/aclocal/libtool.m4'
aclocal: installing `m4/ltoptions.m4' from `/usr/share/aclocal/ltoptions.m4'
aclocal: installing `m4/ltsugar.m4' from `/usr/share/aclocal/ltsugar.m4'
aclocal: installing `m4/ltversion.m4' from `/usr/share/aclocal/ltversion.m4'
aclocal: installing `m4/lt~obsolete.m4' from `/usr/share/aclocal/lt~obsolete.m4'
aclocal: installing `m4/pkg.m4' from `/usr/share/aclocal/pkg.m4'
+ check_for_pkg_config
+ which pkg-config
+ return
+ libtoolize --force --copy
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
+ aclocal -I m4 --install
+ autoconf
+ autoheader
+ automake -a --add-missing -Wall
configure.ac:33: installing `./ar-lib'
configure.ac:37: installing `./compile'
configure.ac:29: installing `./config.guess'
configure.ac:29: installing `./config.sub'
configure.ac:36: installing `./install-sh'
configure.ac:36: installing `./missing'
src/test/Makefile.am:248: patsubst %,$(srcdir: non-POSIX variable name
src/test/Makefile.am:248: (probably a GNU make extension)
src/Makefile.am:36: `src/test/Makefile.am' included from here
src/Makefile.am: installing `./depcomp'
src/Makefile.am:723: installing `./py-compile'
+ cd src/gtest
+ autoreconf -fvi
autoreconf: Entering directory `.'
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal --force -I m4
autoreconf: configure.ac: tracing
autoreconf: running: libtoolize --install --copy --force
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, `build-aux'.
libtoolize: copying file `build-aux/config.guess'
libtoolize: copying file `build-aux/config.sub'
libtoolize: copying file `build-aux/install-sh'
libtoolize: copying file `build-aux/ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
autoreconf: running: /usr/bin/autoconf --force
autoreconf: running: /usr/bin/autoheader --force
autoreconf: running: automake --add-missing --copy --force-missing
configure.ac:24: installing `build-aux/missing'
Makefile.am: installing `build-aux/depcomp'
autoreconf: Leaving directory `.'
+ cd src/rocksdb
+ autoreconf -fvi
autoreconf: Entering directory `.'
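[Note: a minimal sketch of the clean-and-bootstrap sequence above, for reproducing it outside the autobuilder. The builder-specific wrappers (/srv/autobuild-ceph/build-ceph.sh, use-mirror.sh) and its belt-and-braces rm -rf of stale submodule checkouts are omitted; autogen.sh also re-runs autoreconf in bundled subdirectories such as src/gtest and src/rocksdb, as the surrounding output shows.]

    #!/bin/sh -ex
    # Return the tree to a pristine state, including submodules.
    git clean -fdx
    git reset --hard
    git submodule foreach 'git clean -fdx && git reset --hard'
    # Re-point and re-fetch every submodule at its pinned commit.
    git submodule sync
    git submodule update --init
    # Regenerate configure and the Makefile templates.
    ./autogen.sh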
autoreconf: configure.ac: not using Gettext autoreconf: running: aclocal --force -I m4 autoreconf: configure.ac: tracing autoreconf: running: libtoolize --install --copy --force libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'. libtoolize: copying file `m4/libtool.m4' libtoolize: copying file `m4/ltoptions.m4' libtoolize: copying file `m4/ltsugar.m4' libtoolize: copying file `m4/ltversion.m4' libtoolize: copying file `m4/lt~obsolete.m4' autoreconf: running: /usr/bin/autoconf --force autoreconf: running: /usr/bin/autoheader --force autoreconf: running: automake --add-missing --copy --force-missing autoreconf: Leaving directory `.' + exit + autoconf + echo --STOP-IGNORE-WARNINGS --STOP-IGNORE-WARNINGS + [ ! -x configure ] + CFLAGS=-fno-omit-frame-pointer -g -O2 CXXFLAGS=-fno-omit-frame-pointer -g ./configure --with-debug --with-radosgw --with-fuse --with-tcmalloc --with-libatomic-ops --with-gtk2 --with-hadoop --with-profiler --enable-cephfs-java --with-librocksdb-static=check checking for git... yes configure: RPM_RELEASE='267.ge27cf41' checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for ar... ar checking the archiver (ar) interface... ar checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking whether gcc and cc understand -c and -o together... yes checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... 
no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking dependency style of gcc... gcc3 checking dependency style of gcc... (cached) gcc3 checking whether make supports nested variables... yes checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking how to run the C++ preprocessor... g++ -E checking for ld used by g++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... (cached) GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking dependency style of g++... (cached) gcc3 checking if compiler is clang... no checking whether make sets $(MAKE)... (cached) yes we have a modern and working yasm we are x86_64 we are not x32 yasm doesn't build the isa-l stuff checking whether gcc accepts -Wtype-limits... yes checking whether gcc accepts -Wignored-qualifiers... yes checking whether C compiler accepts -fvar-tracking-assignments... yes checking whether the compiler supports static_cast<>... yes checking whether gcc recognizes __func__... yes checking whether gcc recognizes __PRETTY_FUNCTION__... yes checking for the pthreads library -lpthreads... no checking whether pthreads work without any flags... no checking whether pthreads work with -Kthread... no checking whether pthreads work with -kthread... no checking for the pthreads library -llthread... no checking whether pthreads work with -pthread... yes checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE checking if more special flags are required for pthreads... no checking whether to check for GCC pthread/shared inconsistencies... yes checking whether -pthread is sufficient with -shared... yes checking whether what we have so far is sufficient with -nostdlib... no checking whether -lpthread saves the day... 
yes checking for uuid_parse in -luuid... yes checking blkid/blkid.h usability... yes checking blkid/blkid.h presence... yes checking for blkid/blkid.h... yes checking for blkid_devno_to_wholedisk in -lblkid... yes checking libudev.h usability... yes checking libudev.h presence... yes checking for libudev.h... yes checking for udev_monitor_receive_device in -ludev... yes checking for resolv.h... yes checking if res_nquery will link (LIBS=)... no checking if res_nquery will link (LIBS=-lresolv)... yes checking for add_key in -lkeyutils... yes checking for pow in -lm... yes checking for syncfs... yes checking for pkg-config... /usr/bin/pkg-config checking pkg-config is at least version 0.9.0... yes checking for CRYPTOPP... yes checking for NSS... yes configure: using cryptopp for cryptography checking for ProfilerFlush in -lprofiler... yes checking for FCGX_Init in -lfcgi... yes checking for XML_Parse in -lexpat... yes checking for curl_easy_init in -lcurl... yes checking fastcgi/fcgiapp.h usability... no checking fastcgi/fcgiapp.h presence... no checking for fastcgi/fcgiapp.h... no checking for curl_multi_wait in -lcurl... yes checking for fuse_main in -lfuse... yes checking for fuse_getgroups... yes checking for malloc in -ltcmalloc... yes find: `/usr/lib/jvm/java/': No such file or directory find: `/usr/lib/jvm/java-gcj/': No such file or directory You have no CLASSPATH, I hope it is good checking for javac... javac checking if javac works... yes checking for javah... /usr/bin/javah checking for jar... jar configure: classpath - :/usr/share/java/junit4.jar checking jni.h usability... yes checking jni.h presence... yes checking for jni.h... yes checking for LIBEDIT... yes checking atomic_ops.h usability... yes checking atomic_ops.h presence... yes checking for atomic_ops.h... yes checking size of AO_t... 8 checking for snappy_compress in -lsnappy... yes checking for leveldb_open in -lleveldb... yes checking leveldb/filter_policy.h usability... yes checking leveldb/filter_policy.h presence... yes checking for leveldb/filter_policy.h... yes checking whether C compiler accepts -msse... yes checking whether C compiler accepts -msse2... yes checking whether C compiler accepts -msse3... yes checking whether C compiler accepts -mssse3... yes checking whether C compiler accepts -mpclmul... yes checking whether C compiler accepts -msse4.1... yes checking whether C compiler accepts -msse4.2... yes checking whether g++ supports C++11 features by default... no checking whether g++ supports C++11 features with -std=gnu++11... no checking whether g++ supports C++11 features with -std=gnu++0x... no checking whether g++ supports C++11 features with -std=c++11... no checking whether g++ supports C++11 features with -std=c++0x... no configure: No compiler with C++11 support was found checking for io_submit in -laio... yes checking libaio.h usability... yes checking libaio.h presence... yes checking for libaio.h... yes checking xfs/xfs.h usability... yes checking xfs/xfs.h presence... yes checking for xfs/xfs.h... yes checking for XFS_XFLAG_EXTSIZE in xfs/xfs.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking for ANSI C header files... (cached) yes checking for sys/wait.h that is POSIX.1 compatible... yes checking boost/spirit/include/classic_core.hpp usability... yes checking boost/spirit/include/classic_core.hpp presence... yes checking for boost/spirit/include/classic_core.hpp... 
yes checking boost/random/discrete_distribution.hpp usability... no checking boost/random/discrete_distribution.hpp presence... no checking for boost/random/discrete_distribution.hpp... no checking boost/statechart/state.hpp usability... yes checking boost/statechart/state.hpp presence... yes checking for boost/statechart/state.hpp... yes checking boost/program_options/option.hpp usability... yes checking boost/program_options/option.hpp presence... yes checking for boost/program_options/option.hpp... yes checking for main in -lboost_system-mt... yes checking for main in -lboost_thread-mt... yes checking for main in -lboost_program_options-mt... yes checking for struct fiemap_extent.fe_logical... yes checking arpa/inet.h usability... yes checking arpa/inet.h presence... yes checking for arpa/inet.h... yes checking arpa/nameser_compat.h usability... yes checking arpa/nameser_compat.h presence... yes checking for arpa/nameser_compat.h... yes checking linux/version.h usability... yes checking linux/version.h presence... yes checking for linux/version.h... yes checking netdb.h usability... yes checking netdb.h presence... yes checking for netdb.h... yes checking netinet/in.h usability... yes checking netinet/in.h presence... yes checking for netinet/in.h... yes checking sys/file.h usability... yes checking sys/file.h presence... yes checking for sys/file.h... yes checking sys/ioctl.h usability... yes checking sys/ioctl.h presence... yes checking for sys/ioctl.h... yes checking sys/mount.h usability... yes checking sys/mount.h presence... yes checking for sys/mount.h... yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/statvfs.h usability... yes checking sys/statvfs.h presence... yes checking for sys/statvfs.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/vfs.h usability... yes checking sys/vfs.h presence... yes checking for sys/vfs.h... yes checking sys/xattr.h usability... yes checking sys/xattr.h presence... yes checking for sys/xattr.h... yes checking syslog.h usability... yes checking syslog.h presence... yes checking for syslog.h... yes checking utime.h usability... yes checking utime.h presence... yes checking for utime.h... yes checking for sync_file_range... yes checking for fallocate... yes checking for struct stat.st_mtim.tv_nsec... yes checking for struct stat.st_mtimespec.tv_nsec... no checking for splice... yes checking for F_SETPIPE_SZ in fcntl.h... yes checking for posix_fallocate... yes checking sys/prctl.h usability... yes checking sys/prctl.h presence... yes checking for sys/prctl.h... yes checking for prctl... yes checking for pipe2... yes checking for posix_fadvise... yes checking for fdatasync... yes checking for pthread_spin_init... yes checking for int8_t... yes checking for uint8_t... yes checking for int16_t... yes checking for uint16_t... yes checking for int32_t... yes checking for uint32_t... yes checking for int64_t... yes checking for uint64_t... yes checking linux/types.h usability... yes checking linux/types.h presence... yes checking for linux/types.h... yes checking for __u8... yes checking for __s8... yes checking for __u16... yes checking for __s16... yes checking for __u32... yes checking for __s32... yes checking for __u64... yes checking for __s64... yes checking for __le16... 
yes checking for __be16... yes checking for __le32... yes checking for __be32... yes checking for __le64... yes checking for __be64... yes checking if lttng-gen-tp is sane... no configure: lttng auto-disabled checking babeltrace/ctf/events.h usability... yes checking babeltrace/ctf/events.h presence... yes checking for babeltrace/ctf/events.h... yes checking babeltrace/babeltrace.h usability... yes checking babeltrace/babeltrace.h presence... yes checking for babeltrace/babeltrace.h... yes checking whether BT_CLOCK_REAL is declared... no configure: babeltrace auto-disabled checking whether strerror_r is declared... yes checking for strerror_r... yes checking whether strerror_r returns char *... yes checking for a Python interpreter with version >= 2.4... python checking for python... /usr/bin/python checking for python version... 2.7 checking for python platform... linux2 checking for python script directory... ${prefix}/lib/python2.7/dist-packages checking for python extension module directory... ${exec_prefix}/lib/python2.7/dist-packages configure: creating ./config.status config.status: creating Makefile config.status: creating src/Makefile config.status: creating src/ocf/Makefile config.status: creating src/ocf/ceph config.status: creating src/ocf/rbd config.status: creating src/java/Makefile config.status: creating src/tracing/Makefile config.status: creating man/Makefile config.status: creating ceph.spec config.status: creating src/acconfig.h config.status: executing depfiles commands config.status: executing libtool commands === configuring in src/gtest (/srv/autobuild-ceph/gitbuilder.git/build/src/gtest) configure: running /bin/bash ./configure --disable-option-checking '--prefix=/usr/local' '--with-debug' '--with-radosgw' '--with-fuse' '--with-tcmalloc' '--with-libatomic-ops' '--with-gtk2' '--with-hadoop' '--with-profiler' '--enable-cephfs-java' '--with-librocksdb-static=check' 'CFLAGS=-fno-omit-frame-pointer -g -O2' 'CXXFLAGS=-fno-omit-frame-pointer -g' --cache-file=/dev/null --srcdir=. checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... 
yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking how to run the C++ preprocessor... g++ -E checking for ld used by g++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... (cached) GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking for python... /usr/bin/python checking for the pthreads library -lpthreads... no checking whether pthreads work without any flags... no checking whether pthreads work with -Kthread... no checking whether pthreads work with -kthread... no checking for the pthreads library -llthread... no checking whether pthreads work with -pthread... yes checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE checking if more special flags are required for pthreads... no checking whether to check for GCC pthread/shared inconsistencies... yes checking whether -pthread is sufficient with -shared... 
yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating scripts/gtest-config
config.status: creating build-aux/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
+ [ ! -e Makefile ]
+ set --
+ export CCACHE_DIR=/srv/autobuild-ceph/gitbuilder.git/build/../../ccache
+ command -v ccache
+ [ ! -e /srv/autobuild-ceph/gitbuilder.git/build/../../ccache ]
+ set -- CC=ccache gcc CXX=ccache g++
+ grep -c processor /proc/cpuinfo
+ NCPU=48
+ ionice -c3 nice -n20 make -j48 CC=ccache gcc CXX=ccache g++
Making all in .
make[1]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build'
make[2]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/gtest'
depbase=`echo src/gtest-all.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
ccache g++ -DHAVE_CONFIG_H -I. -I./build-aux -I. -I./include -DGTEST_HAS_TR1_TUPLE=0 -pthread -DGTEST_HAS_PTHREAD=1 -fno-omit-frame-pointer -g -MT src/gtest-all.o -MD -MP -MF $depbase.Tpo -c -o src/gtest-all.o src/gtest-all.cc &&\
mv -f $depbase.Tpo $depbase.Po
depbase=`echo src/gtest_main.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
ccache g++ -DHAVE_CONFIG_H -I. -I./build-aux -I. -I./include -DGTEST_HAS_TR1_TUPLE=0 -pthread -DGTEST_HAS_PTHREAD=1 -fno-omit-frame-pointer -g -MT src/gtest_main.o -MD -MP -MF $depbase.Tpo -c -o src/gtest_main.o src/gtest_main.cc &&\
mv -f $depbase.Tpo $depbase.Po
rm -f lib/libgtest.a
ar cru lib/libgtest.a src/gtest-all.o
ranlib lib/libgtest.a
rm -f lib/libgtest_main.a
ar cru lib/libgtest_main.a src/gtest_main.o lib/libgtest.a
ranlib lib/libgtest_main.a
make[2]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/gtest'
make[1]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build'
Making all in src
make[1]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src'
rm -f init-ceph init-ceph.tmp
sed -e 's|@bindir[@]|/usr/local/bin|g' -e 's|@sbindir[@]|/usr/local/sbin|g' -e 's|@libdir[@]|/usr/local/lib|g' -e 's|@sysconfdir[@]|/usr/local/etc|g' -e 's|@datadir[@]|/usr/local/share/ceph|g' -e 's|@prefix[@]|/usr/local|g' -e 's|@@GCOV_PREFIX_STRIP[@][@]|5|g' './init-ceph.in' >init-ceph.tmp
chmod +x init-ceph.tmp
chmod a-w init-ceph.tmp
mv init-ceph.tmp init-ceph
make all-recursive
make[2]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src'
Making all in ocf
make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/ocf'
make[3]: Nothing to be done for `all'.
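[Note: the make invocation above is the autobuilder's standard low-impact build: compile through ccache, take the job count from /proc/cpuinfo, and run make at idle I/O and lowest CPU priority. A minimal equivalent sketch, assuming ccache is installed; the CCACHE_DIR below is an example path, not the builder's (the builder keeps its cache outside the build tree so `git clean -fdx` cannot wipe it):]

    #!/bin/sh -ex
    export CCACHE_DIR=$HOME/.ccache-ceph     # example location, adjust to taste
    NCPU=$(grep -c processor /proc/cpuinfo)  # one make job per CPU
    # -c3 = idle I/O class, -n20 = lowest CPU priority
    ionice -c3 nice -n20 make -j"$NCPU" CC="ccache gcc" CXX="ccache g++"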
make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/ocf'
Making all in java
make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java'
export CLASSPATH=java/ ; \
javac -classpath java -source 1.5 -target 1.5 -Xlint:-options java/com/ceph/fs/*.java
export CLASSPATH=java/ ; \
/usr/bin/javah -classpath java -jni -o native/com_ceph_fs_CephMount.h com.ceph.fs.CephMount
make all-am
make[4]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java'
jar cf libcephfs.jar -C java com/ceph/fs/CephMount.class -C java com/ceph/fs/CephStat.class -C java com/ceph/fs/CephStatVFS.class -C java com/ceph/fs/CephNativeLoader.class -C java com/ceph/fs/CephNotMountedException.class -C java com/ceph/fs/CephFileAlreadyExistsException.class -C java com/ceph/fs/CephAlreadyMountedException.class -C java com/ceph/fs/CephNotDirectoryException.class -C java com/ceph/fs/CephPoolException.class -C java com/ceph/fs/CephFileExtent.class -C java com/ceph/crush/Bucket.class
export CLASSPATH=:/usr/share/java/junit4.jar:java/:test/ ; \
javac -source 1.5 -target 1.5 -Xlint:-options test/com/ceph/fs/*.java
jar cf libcephfs-test.jar -C test com/ceph/fs/CephDoubleMountTest.class -C test com/ceph/fs/CephMountCreateTest.class -C test com/ceph/fs/CephMountTest.class -C test com/ceph/fs/CephUnmountedTest.class -C test com/ceph/fs/CephAllTests.class
make[4]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java'
make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java'
Making all in tracing
make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing'
make all-am
make[4]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing'
make[4]: Nothing to be done for `all-am'.
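[Note: the cephfs-java step above is a three-stage pipeline: javac compiles the bindings, javah emits the JNI header that the native libcephfs_jni side includes, and jar packages the classes. A condensed sketch; only one class is shown in the jar line, where the real invocation lists every class:]

    export CLASSPATH=java/
    # Compile the Java bindings (1.5 bytecode for compatibility).
    javac -classpath java -source 1.5 -target 1.5 -Xlint:-options java/com/ceph/fs/*.java
    # Generate the JNI header consumed by the C++ half of the bindings.
    javah -classpath java -jni -o native/com_ceph_fs_CephMount.h com.ceph.fs.CephMount
    # Package the compiled classes into the jar.
    jar cf libcephfs.jar -C java com/ceph/fs/CephMount.class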
make[4]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing' make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing' make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src' ./check_version ./.git_version CXX erasure-code/libec_jerasure_sse3_la-ErasureCode.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse3_la-cauchy.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse3_la-galois.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse3_la-jerasure.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse3_la-liberation.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse3_la-reed_sol.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_wgen.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_method.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_w16.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_w32.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_w64.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_general.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_w128.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_w4.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_rand.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse3_la-gf_w8.lo CXX erasure-code/jerasure/libec_jerasure_sse3_la-ErasureCodeJerasure.lo CXX erasure-code/libec_jerasure_sse4_la-ErasureCode.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse4_la-cauchy.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse4_la-galois.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse4_la-jerasure.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse4_la-liberation.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_sse4_la-reed_sol.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_wgen.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_method.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_w16.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_w32.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_w64.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_w128.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_general.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_w4.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_rand.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_sse4_la-gf_w8.lo CXX erasure-code/jerasure/libec_jerasure_sse4_la-ErasureCodeJerasure.lo CXX erasure-code/libec_lrc_la-ErasureCode.lo regenerating ./.git_version with e27cf4139fbe895ef4d1817365275e6a50d603d8 v0.86-267-ge27cf41 CXX common/DecayCounter.lo CXX common/LogClient.lo CXX common/LogEntry.lo CXX common/PrebufferedStreambuf.lo CXX common/SloppyCRCMap.lo CXX common/BackTrace.lo CXX common/perf_counters.lo CXX common/Mutex.lo CXX common/OutputDataSocket.lo CXX common/admin_socket.lo CXX common/admin_socket_client.lo CXX common/cmdparse.lo CC common/escape.lo CXX common/io_priority.lo CXX common/Clock.lo CXX common/Throttle.lo CXX common/Timer.lo CXX common/Finisher.lo CXX common/environment.lo CXX common/assert.lo CXX common/run_cmd.lo CXX 
common/WorkQueue.lo CXX common/ConfUtils.lo CXX common/MemoryModel.lo CC common/armor.lo CXX common/fd.lo CC common/xattr.lo CC common/safe_io.lo CXX common/snap_types.lo CXX common/str_list.lo CXX common/str_map.lo CXX common/errno.lo CXX common/RefCountedObj.lo CXX common/blkdev.lo CXX common/common_init.lo CC common/pipe.lo CXX common/ceph_argparse.lo CXX common/ceph_context.lo CXX common/buffer.lo CXX common/types.lo CXX common/code_environment.lo CXX common/dout.lo CXX common/histogram.lo CXX common/signal.lo CXX common/simple_spin.lo CXX common/Thread.lo CXX common/Formatter.lo CXX common/HeartbeatMap.lo CXX common/config.lo CC common/utf8.lo CC common/mime.lo CXX common/strtol.lo CXX common/page.lo CXX common/lockdep.lo CXX common/hex.lo CXX common/entity_name.lo CXX common/ceph_crypto.lo CXX common/ceph_crypto_cms.lo CXX common/ceph_json.lo CXX common/ipaddr.lo CXX common/pick_address.lo CXX common/util.lo CXX common/TextTable.lo CXX common/ceph_fs.lo CXX common/ceph_hash.lo CXX common/ceph_strings.lo CXX common/ceph_frag.lo CC common/addr_parsing.lo CXX common/hobject.lo CXX common/bloom_filter.lo CC common/linux_version.lo CC common/module.lo CC common/libcommon_crc_la-sctp_crc32.lo CXX common/libcommon_crc_la-crc32c.lo CC common/libcommon_crc_la-crc32c_intel_baseline.lo CC common/libcommon_crc_la-crc32c_intel_fast.lo CPPAS common/libcommon_crc_la-crc32c_intel_fast_asm.lo CPPAS common/libcommon_crc_la-crc32c_intel_fast_zero_asm.lo ./yasm-wrapper: got -DHAVE_CONFIG_H -I. -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE -D__STDC_FORMAT_MACROS -D_GNU_SOURCE -DCEPH_LIBDIR="/usr/local/lib" -DCEPH_PKGLIBDIR="/usr/local/lib/ceph" -DGTEST_HAS_TR1_TUPLE=0 -f elf64 -fno-omit-frame-pointer -g -O2 -MT common/libcommon_crc_la-crc32c_intel_fast_asm.lo -MD -MP -MF common/.deps/libcommon_crc_la-crc32c_intel_fast_asm.Tpo -c common/crc32c_intel_fast_asm.S -fPIC -DPIC -o common/.libs/libcommon_crc_la-crc32c_intel_fast_asm.o ./yasm-wrapper: yasm -I. -f elf64 common/crc32c_intel_fast_asm.S -o common/.libs/libcommon_crc_la-crc32c_intel_fast_asm.o ./yasm-wrapper: got -DHAVE_CONFIG_H -I. -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE -D__STDC_FORMAT_MACROS -D_GNU_SOURCE -DCEPH_LIBDIR="/usr/local/lib" -DCEPH_PKGLIBDIR="/usr/local/lib/ceph" -DGTEST_HAS_TR1_TUPLE=0 -f elf64 -fno-omit-frame-pointer -g -O2 -MT common/libcommon_crc_la-crc32c_intel_fast_zero_asm.lo -MD -MP -MF common/.deps/libcommon_crc_la-crc32c_intel_fast_zero_asm.Tpo -c common/crc32c_intel_fast_zero_asm.S -fPIC -DPIC -o common/.libs/libcommon_crc_la-crc32c_intel_fast_zero_asm.o ./yasm-wrapper: yasm -I. 
-f elf64 common/crc32c_intel_fast_zero_asm.S -o common/.libs/libcommon_crc_la-crc32c_intel_fast_zero_asm.o CXX auth/Crypto.lo CXX auth/KeyRing.lo CXX auth/RotatingKeyRing.lo CXX libcephfs.lo CXX mon/PGMap.lo CXX mon/Monitor.lo CXX mon/Paxos.lo CXX mon/PaxosService.lo CXX mon/OSDMonitor.lo CXX mon/MDSMonitor.lo CXX mon/MonmapMonitor.lo CXX mon/PGMonitor.lo CXX mon/LogMonitor.lo CXX mon/AuthMonitor.lo CXX mon/Elector.lo CXX mon/MonitorStore.lo CXX mon/HealthMonitor.lo CXX mon/DataHealthService.lo CXX mon/ConfigKeyService.lo CXX common/libos_la-TrackedOp.lo CXX mds/Capability.lo CXX mds/MDS.lo CXX mds/Beacon.lo CXX mds/flock.lo CC mds/locks.lo CXX mds/journal.lo CXX mds/Server.lo CXX mds/Mutation.lo CXX mds/MDCache.lo CXX mds/RecoveryQueue.lo CXX mds/Locker.lo CXX mds/Migrator.lo CXX mds/MDBalancer.lo CXX mds/CDentry.lo CXX mds/CDir.lo CXX mds/CInode.lo CXX mds/LogEvent.lo CXX mds/MDSTable.lo CXX mds/InoTable.lo CXX mds/JournalPointer.lo CXX mds/MDSTableClient.lo CXX mds/MDSTableServer.lo CXX mds/SnapRealm.lo CXX mds/SnapServer.lo CXX mds/snap.lo CXX mds/SessionMap.lo CXX mds/MDSContext.lo CXX mds/MDSAuthCaps.lo CXX mds/MDLog.lo CXX common/TrackedOp.lo CXX osd/libosd_types_la-PGLog.lo CXX osd/libosd_types_la-osd_types.lo CXX osd/libosd_types_la-ECUtil.lo CXX osd/libosd_la-PG.lo CXX osd/libosd_la-ReplicatedPG.lo CXX osd/libosd_la-ReplicatedBackend.lo CXX osd/libosd_la-ECBackend.lo CXX osd/libosd_la-ECMsgTypes.lo CXX osd/libosd_la-ECTransaction.lo CXX osd/libosd_la-PGBackend.lo CXX osd/libosd_la-Ager.lo CXX osd/libosd_la-HitSet.lo CXX osd/libosd_la-OSD.lo CXX osd/libosd_la-OSDCap.lo CXX osd/libosd_la-Watch.lo CXX osd/libosd_la-ClassHandler.lo CXX osd/libosd_la-OpRequest.lo CXX common/libosd_la-TrackedOp.lo CXX osd/libosd_la-SnapMapper.lo CXX client/fuse_ll.lo CC common/secret.lo CXX krbd.lo CXX cls/rbd/cls_rbd.lo CXX cls/lock/cls_lock.lo CXX cls/refcount/cls_refcount.lo CXX cls/version/cls_version.lo CXX cls/log/cls_log.lo CXX cls/statelog/cls_statelog.lo CXX cls/replica_log/cls_replica_log.lo CXX cls/user/cls_user.lo CXX cls/rgw/cls_rgw.lo CC client/test_ioctls.o CXX rgw/rgw_multiparser.o CXX rgw/rgw_jsonparser.o CXX rgw/rgw_common.o CXX rgw/rgw_env.o CXX rgw/rgw_json_enc.o CXX test/erasure-code/ceph_erasure_code_benchmark.o CXX test/erasure-code/ceph_erasure_code.o CXX test/test_mutate.o CXX test/test_rewrite_latency.o CXX test/testmsgr.o CXX test/streamtest.o CXX test/test_trans.o CXX test/testcrypto.o CXX test/testkeys.o CXX test/omap_bench.o CXX test/kv_store_bench.o CXX key_value_store/kv_flat_btree_async.o CXX test/system/rados_list_parallel.o CXX test/system/st_rados_create_pool.o CXX test/system/st_rados_list_objects.o CXX test/system/rados_open_pools_parallel.o CXX test/system/rados_delete_pools_parallel.o CXX test/system/st_rados_delete_pool.o CXX test/system/rados_watch_notify.o CXX test/system/st_rados_delete_objs.o CXX test/system/st_rados_watch.o CXX test/system/st_rados_notify.o CXX test/bench_log.o CXX test/ceph_test_cors-test_cors.o CXX test/ceph_test_cls_rgw_meta-test_rgw_admin_meta.o CXX test/ceph_test_cls_rgw_log-test_rgw_admin_log.o CXX test/ceph_test_cls_rgw_opstate-test_rgw_admin_opstate.o CXX test/multi_stress_watch.o CXX common/dummy.o CXX test/librados/ceph_test_rados_api_cmd-cmd.o CXX test/librados/ceph_test_rados_api_io-io.o CXX test/librados/ceph_test_rados_api_c_write_operations-c_write_operations.o CXX test/librados/ceph_test_rados_api_c_read_operations-c_read_operations.o CXX test/librados/ceph_test_rados_api_aio-aio.o CXX 
test/librados/ceph_test_rados_api_list-list.o CXX test/librados/ceph_test_rados_api_pool-pool.o CXX test/librados/ceph_test_rados_api_stat-stat.o CXX test/librados/ceph_test_rados_api_watch_notify-watch_notify.o CXX test/librados/ceph_test_rados_api_snapshots-snapshots.o CXX test/librados/ceph_test_rados_api_cls-cls.o CXX test/librados/ceph_test_rados_api_misc-misc.o CXX test/librados/ceph_test_rados_api_tier-tier.o CXX osd/ceph_test_rados_api_tier-HitSet.o CXX test/librados/ceph_test_rados_api_lock-lock.o CXX test/libradosstriper/ceph_test_rados_striper_api_io-io.o CXX test/libradosstriper/ceph_test_rados_striper_api_aio-aio.o CXX test/libradosstriper/ceph_test_rados_striper_api_striping-striping.o CXX test/objectstore/workload_generator.o CXX test/objectstore/TestObjectStoreState.o CXX test/objectstore/test_idempotent.o CXX test/objectstore/FileStoreTracker.o CXX test/objectstore/test_idempotent_sequence.o CXX test/objectstore/DeterministicOpSequence.o CXX test/objectstore/FileStoreDiff.o CXX test/ceph_xattr_bench-xattr_bench.o CXX test/ceph_test_filejournal-test_filejournal.o CXX test/ceph_test_stress_watch-test_stress_watch.o CXX test/ceph_test_snap_mapper-test_snap_mapper.o CXX test/test_cfuse_cache_invalidate.o CC test/test_c_headers.o CXX test/test_get_blkdev_size.o CXX rgw/rgw_resolve.o CXX rgw/rgw_rest.o CXX rgw/rgw_rest_swift.o CXX rgw/rgw_rest_s3.o CXX rgw/rgw_rest_usage.o CXX rgw/rgw_rest_user.o CXX rgw/rgw_rest_bucket.o CXX rgw/rgw_rest_metadata.o CXX rgw/rgw_replica_log.o CXX rgw/rgw_rest_log.o CXX rgw/rgw_rest_opstate.o CXX rgw/rgw_rest_replica_log.o CXX rgw/rgw_rest_config.o CXX rgw/rgw_http_client.o CXX rgw/rgw_swift.o CXX rgw/rgw_swift_auth.o CXX rgw/rgw_loadgen.o CXX rgw/rgw_civetweb.o CXX rgw/rgw_civetweb_log.o CXX rgw/rgw_main.o CXX rgw/rgw_admin.o CXX rbd_replay/rbd-replay.o CXX rgw/ceph_dencoder-rgw_dencoder.o CXX rgw/ceph_dencoder-rgw_acl.o CXX rgw/ceph_dencoder-rgw_common.o CXX rgw/ceph_dencoder-rgw_env.o CXX rgw/ceph_dencoder-rgw_json_enc.o CXX tools/ceph_objectstore_tool.o CXX tools/monmaptool.o CXX tools/crushtool.o CXX tools/osdmaptool.o CXX common/obj_bencher.o CXX tools/ceph_conf.o CXX tools/ceph_authtool.o CXX tools/mon_store_converter.o CXX ceph_mon.o CXX ceph_osd.o CXX ceph_mds.o CXX cephfs.o CXX librados-config.o CXX ceph_syn.o CXX client/SyntheticClient.o CXX rbd.o CXX ceph_fuse.o CXX test/common/get_command_descriptions.o rm -f ceph-debugpack ceph-debugpack.tmp sed -e 's|@bindir[@]|/usr/local/bin|g' -e 's|@sbindir[@]|/usr/local/sbin|g' -e 's|@libdir[@]|/usr/local/lib|g' -e 's|@sysconfdir[@]|/usr/local/etc|g' -e 's|@datadir[@]|/usr/local/share/ceph|g' -e 's|@prefix[@]|/usr/local|g' -e 's|@@GCOV_PREFIX_STRIP[@][@]|5|g' './ceph-debugpack.in' >ceph-debugpack.tmp chmod +x ceph-debugpack.tmp chmod a-w ceph-debugpack.tmp mv ceph-debugpack.tmp ceph-debugpack rm -f ceph-post-file ceph-post-file.tmp sed -e 's|@bindir[@]|/usr/local/bin|g' -e 's|@sbindir[@]|/usr/local/sbin|g' -e 's|@libdir[@]|/usr/local/lib|g' -e 's|@sysconfdir[@]|/usr/local/etc|g' -e 's|@datadir[@]|/usr/local/share/ceph|g' -e 's|@prefix[@]|/usr/local|g' -e 's|@@GCOV_PREFIX_STRIP[@][@]|5|g' './ceph-post-file.in' >ceph-post-file.tmp rm -f ceph-crush-location ceph-crush-location.tmp sed -e 's|@bindir[@]|/usr/local/bin|g' -e 's|@sbindir[@]|/usr/local/sbin|g' -e 's|@libdir[@]|/usr/local/lib|g' -e 's|@sysconfdir[@]|/usr/local/etc|g' -e 's|@datadir[@]|/usr/local/share/ceph|g' -e 's|@prefix[@]|/usr/local|g' -e 's|@@GCOV_PREFIX_STRIP[@][@]|5|g' './ceph-crush-location.in' >ceph-crush-location.tmp chmod +x 
ceph-post-file.tmp chmod a-w ceph-post-file.tmp mv ceph-post-file.tmp ceph-post-file rm -f ceph-coverage ceph-coverage.tmp sed -e 's|@bindir[@]|/usr/local/bin|g' -e 's|@sbindir[@]|/usr/local/sbin|g' -e 's|@libdir[@]|/usr/local/lib|g' -e 's|@sysconfdir[@]|/usr/local/etc|g' -e 's|@datadir[@]|/usr/local/share/ceph|g' -e 's|@prefix[@]|/usr/local|g' -e 's|@@GCOV_PREFIX_STRIP[@][@]|5|g' './ceph-coverage.in' >ceph-coverage.tmp cp -f ./fetch_config ./sample.fetch_config chmod +x ceph-crush-location.tmp chmod +x ceph-coverage.tmp chmod a-w ceph-crush-location.tmp chmod a-w ceph-coverage.tmp mv ceph-crush-location.tmp ceph-crush-location mv ceph-coverage.tmp ceph-coverage CXX cls/version/cls_version_client.o CXX cls/version/cls_version_types.o CXX cls/log/cls_log_client.o CXX cls/statelog/cls_statelog_client.o CXX cls/replica_log/cls_replica_log_types.o CXX cls/replica_log/cls_replica_log_ops.o CXX cls/replica_log/cls_replica_log_client.o CXX cls/user/cls_user_client.o CXX cls/user/cls_user_types.o CXX cls/user/cls_user_ops.o CXX erasure-code/libec_jerasure_generic_la-ErasureCode.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_generic_la-cauchy.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_generic_la-galois.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_generic_la-jerasure.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_generic_la-liberation.lo CC erasure-code/jerasure/jerasure/src/libec_jerasure_generic_la-reed_sol.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_wgen.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_method.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_w16.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_w32.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_w64.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_w128.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_general.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_w4.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_rand.lo CC erasure-code/jerasure/gf-complete/src/libec_jerasure_generic_la-gf_w8.lo if [ -n "$NO_VERSION" ] ; then \ if [ ! 
-f ./ceph_ver.h ] ; then \ ./make_version -n ./ceph_ver.h ; \ fi; \ else \ ./make_version ./.git_version ./ceph_ver.h ; \ fi $1: ./.git_version CXX erasure-code/jerasure/libec_jerasure_generic_la-ErasureCodeJerasure.lo CC crush/builder.lo CC crush/mapper.lo CC crush/crush.lo CC crush/hash.lo CXX crush/CrushWrapper.lo CXX crush/CrushCompiler.lo CXX crush/CrushTester.lo CXX erasure-code/jerasure/libec_jerasure_la-ErasureCodePluginSelectJerasure.lo CXX erasure-code/lrc/libec_lrc_la-ErasureCodePluginLrc.lo CXX erasure-code/lrc/libec_lrc_la-ErasureCodeLrc.lo CXX common/libec_lrc_la-str_map.lo CXX json_spirit/json_spirit_reader.lo CXX json_spirit/json_spirit_writer.lo CXX test/erasure-code/libec_example_la-ErasureCodePluginExample.lo CXX test/erasure-code/libec_missing_entry_point_la-ErasureCodePluginMissingEntryPoint.lo CXX test/erasure-code/libec_missing_version_la-ErasureCodePluginMissingVersion.lo CXX test/erasure-code/libec_hangs_la-ErasureCodePluginHangs.lo CXX test/erasure-code/libec_fail_to_initialize_la-ErasureCodePluginFailToInitialize.lo CXX test/erasure-code/libec_fail_to_register_la-ErasureCodePluginFailToRegister.lo CXX test/erasure-code/libec_test_jerasure_sse4_la-TestJerasurePluginSSE4.lo CXX test/erasure-code/libec_test_jerasure_sse3_la-TestJerasurePluginSSE3.lo CXX test/erasure-code/libec_test_jerasure_generic_la-TestJerasurePluginGeneric.lo CXX librados/librados_la-librados.lo CXX librados/librados_la-RadosClient.lo CXX librados/librados_la-IoCtxImpl.lo CXX librados/librados_la-snap_set_diff.lo CXX librados/librados_la-RadosXattrIter.lo CXX cls/lock/cls_lock_client.lo CXX cls/lock/cls_lock_types.lo CXX cls/lock/cls_lock_ops.lo CXX osdc/Objecter.lo CXX osdc/ObjectCacher.lo CXX osdc/Filer.lo CXX osdc/Striper.lo CXX osdc/Journaler.lo CC ceph_ver.lo CXX common/version.lo CXX mon/MonCap.lo CXX mon/MonClient.lo CXX mon/MonMap.lo CXX osd/OSDMap.lo CXX osd/osd_types.lo CXX osd/ECMsgTypes.lo CXX osd/HitSet.lo CXX mds/MDSMap.lo CXX mds/inode_backtrace.lo CXX mds/mdstypes.lo CXXLD libcommon_crc.la CXX erasure-code/ErasureCodePlugin.lo CXX msg/Message.lo CXX msg/Messenger.lo CXX msg/msg_types.lo CXX msg/simple/Accepter.lo CXX msg/simple/DispatchQueue.lo CXX msg/simple/Pipe.lo CXX msg/simple/PipeConnection.lo CXX msg/simple/SimpleMessenger.lo CXX auth/AuthAuthorizeHandler.lo CXX auth/AuthClientHandler.lo CXX auth/AuthSessionHandler.lo CXX auth/AuthServiceHandler.lo CXX auth/AuthMethodList.lo CXX auth/cephx/CephxAuthorizeHandler.lo CXX auth/cephx/CephxClientHandler.lo CXX auth/cephx/CephxProtocol.lo CXX auth/cephx/CephxServiceHandler.lo CXX auth/cephx/CephxSessionHandler.lo CXX auth/cephx/CephxKeyServer.lo CXX auth/none/AuthNoneAuthorizeHandler.lo CXX auth/unknown/AuthUnknownAuthorizeHandler.lo CXX log/Log.lo CXX log/SubsystemMap.lo CC arch/intel.lo CC arch/neon.lo CXX arch/probe.lo CXX libradosstriper/libradosstriper_la-libradosstriper.lo CXX libradosstriper/libradosstriper_la-RadosStriperImpl.lo CXX libradosstriper/libradosstriper_la-MultiAioCompletionImpl.lo CXX librbd/librbd.lo CXX librbd/AioCompletion.lo CXX librbd/AioRequest.lo CXX librbd/ImageCtx.lo CXX librbd/internal.lo CXX librbd/LibrbdWriteback.lo CXX librbd/WatchCtx.lo CXX cls/rbd/cls_rbd_client.lo CXX client/Client.lo CXX client/Inode.lo CXX client/Dentry.lo CXX client/MetaRequest.lo CXX client/ClientSnapRealm.lo CXX client/MetaSession.lo CXX client/Trace.lo CXX java/native/libcephfs_jni_la-libcephfs_jni.lo CXX java/native/libcephfs_jni_la-JniConstants.lo CXXLD libmon_types.la CXX os/libos_la-chain_xattr.lo CXX 
os/libos_la-DBObjectMap.lo CXX os/libos_la-GenericObjectMap.lo CXX os/libos_la-FileJournal.lo CXX os/libos_la-FileStore.lo CXX os/libos_la-FlatIndex.lo CXX os/libos_la-GenericFileStoreBackend.lo CXX os/libos_la-HashIndex.lo CXX os/libos_la-IndexManager.lo CXX os/libos_la-JournalingObjectStore.lo CXX os/libos_la-LevelDBStore.lo CXX os/libos_la-LFNIndex.lo CXX os/libos_la-MemStore.lo CXX os/libos_la-KeyValueDB.lo CXX os/libos_la-KeyValueStore.lo CXX os/libos_la-ObjectStore.lo CXX os/libos_la-WBThrottle.lo CXX os/libos_la-BtrfsFileStoreBackend.lo CXX os/libos_la-XfsFileStoreBackend.lo CXX os/libos_types_la-Transaction.lo CXXLD libosd_types.la CXX objclass/libosd_la-class_api.lo CXX global/global_context.lo CXX global/global_init.lo CXX global/pidfile.lo CXX global/signal_handler.lo CXX perfglue/heap_profiler.lo CXX perfglue/cpu_profiler.lo CCLD libsecret.la CXX rgw/librgw_la-librgw.lo CXX rgw/librgw_la-rgw_acl.lo CXX rgw/librgw_la-rgw_acl_s3.lo CXX rgw/librgw_la-rgw_acl_swift.lo CXX rgw/librgw_la-rgw_client_io.lo CXX rgw/librgw_la-rgw_fcgi.lo CXX rgw/librgw_la-rgw_xml.lo CXX rgw/librgw_la-rgw_usage.lo CXX rgw/librgw_la-rgw_json_enc.lo CXX rgw/librgw_la-rgw_user.lo CXX rgw/librgw_la-rgw_bucket.lo CXX rgw/librgw_la-rgw_tools.lo CXX rgw/librgw_la-rgw_rados.lo CXX rgw/librgw_la-rgw_http_client.lo CXX rgw/librgw_la-rgw_rest_client.lo CXX rgw/librgw_la-rgw_rest_conn.lo CXX rgw/librgw_la-rgw_op.lo CXX rgw/librgw_la-rgw_common.lo CXX rgw/librgw_la-rgw_cache.lo CXX rgw/librgw_la-rgw_formats.lo CXX rgw/librgw_la-rgw_log.lo CXX rgw/librgw_la-rgw_multi.lo CXX rgw/librgw_la-rgw_policy_s3.lo CXX rgw/librgw_la-rgw_gc.lo CXX rgw/librgw_la-rgw_multi_del.lo CXX rgw/librgw_la-rgw_env.lo CXX rgw/librgw_la-rgw_cors.lo CXX rgw/librgw_la-rgw_cors_s3.lo CXX rgw/librgw_la-rgw_auth_s3.lo CXX rgw/librgw_la-rgw_metadata.lo CXX rgw/librgw_la-rgw_replica_log.lo CXX rgw/librgw_la-rgw_keystone.lo CXX rgw/librgw_la-rgw_quota.lo CXX rgw/librgw_la-rgw_dencoder.lo CXX cls/refcount/cls_refcount_client.lo CXX cls/refcount/cls_refcount_ops.lo CXX cls/rgw/cls_rgw_client.lo CXX cls/rgw/cls_rgw_types.lo CXX cls/rgw/cls_rgw_ops.lo CXX rbd_replay/actions.lo CXX rbd_replay/Deser.lo CXX rbd_replay/ImageNameMap.lo CXX rbd_replay/PendingIO.lo CXX rbd_replay/rbd_loc.lo CXX rbd_replay/Replayer.lo CXX rbd_replay/Ser.lo CXX rbd_replay/ios.lo CXX test/system/cross_process_sem.lo CXX test/system/systest_runnable.lo CXX test/system/systest_settings.lo CXX test/librados/libradostest_la-test.lo CXX test/librados/libradostest_la-TestCase.lo CXX test/libradosstriper/libradosstripertest_la-TestCase.lo CXXLD libkrbd.la CXX cls/hello/cls_hello.lo CXXLD libcls_rbd.la CXXLD libcls_lock.la CXXLD libcls_version.la CXXLD libcls_log.la CXXLD libcls_statelog.la CXXLD libcls_replica_log.la CXXLD libcls_user.la CXX key_value_store/cls_kvs.lo CCLD ceph_test_ioctls CXX test/TestTimers.o CXX test/TestSignalHandlers.o CXX test/osd/TestRados.o CXX test/osd/TestOpStat.o CXX test/osd/Object.o CXX test/osd/RadosModel.o CXX test/bench/small_io_bench.o CXX test/bench/rados_backend.o CXX test/bench/detailed_stat_collector.o CXX test/bench/bencher.o CXX test/bench/small_io_bench_fs.o CXX test/bench/testfilestore_backend.o CXX test/bench/small_io_bench_dumb.o CXX test/bench/dumb_backend.o CXX test/bench/small_io_bench_rbd.o CXX test/bench/rbd_backend.o CXX test/bench/tp_bench.o CXX test/rgw/ceph_test_rgw_manifest-test_rgw_manifest.o CXX test/librbd/ceph_test_librbd-test_librbd.o CC test/librbd/ceph_test_librbd_fsx-fsx.o CXX test/cls_rbd/ceph_test_cls_rbd-test_cls_rbd.o CXX 
test/cls_refcount/ceph_test_cls_refcount-test_cls_refcount.o CXX test/cls_version/ceph_test_cls_version-test_cls_version.o CXX test/cls_log/ceph_test_cls_log-test_cls_log.o CXX test/cls_statelog/ceph_test_cls_statelog-test_cls_statelog.o CXX test/cls_replica_log/ceph_test_cls_replica_log-test_cls_replica_log.o CXX test/cls_lock/ceph_test_cls_lock-test_cls_lock.o CXX test/cls_hello/ceph_test_cls_hello-test_cls_hello.o CXX test/cls_rgw/ceph_test_cls_rgw-test_cls_rgw.o CXX test/mon/test_mon_workloadgen.o CXX test/mon/ceph_test_mon_msg-test-mon-msg.o CXX test/libcephfs/ceph_test_libcephfs-test.o CXX test/libcephfs/ceph_test_libcephfs-readdir_r_cb.o CXX test/libcephfs/ceph_test_libcephfs-caps.o CXX test/libcephfs/ceph_test_libcephfs-multiclient.o CXX test/objectstore/ceph_test_objectstore-store_test.o CXX test/filestore/ceph_test_filestore-TestFileStore.o CXX test/common/ObjectContents.o CXX test/osdc/object_cacher_stress.o CXX test/osdc/FakeWriteback.o CXX test/ObjectMap/ceph_test_object_map-test_object_map.o CXX test/ObjectMap/ceph_test_object_map-KeyValueDBMemory.o CXX test/ObjectMap/ceph_test_keyvaluedb_atomicity-test_keyvaluedb_atomicity.o CXX test/ObjectMap/ceph_test_keyvaluedb_iterators-test_keyvaluedb_iterators.o CXX test/ObjectMap/ceph_test_keyvaluedb_iterators-KeyValueDBMemory.o CXXLD ceph_test_cfuse_cache_invalidate CXX tools/ceph_osdomap_tool.o CXX tools/ceph_monstore_tool.o CXX tools/ceph_kvstore_tool-ceph_kvstore_tool.o CC tools/scratchtool.o CXX tools/scratchtoolpp.o CXX tools/psim.o CXX tools/dupstore.o CXX tools/radosacl.o CXX tools/ceph-client-debug.o CC civetweb/src/radosgw-civetweb.o CXX test/encoding/ceph_dencoder-ceph_dencoder.o CXX tools/rados/rados.o CXX tools/rados/rados_import.o CXX tools/rados/rados_export.o CXX tools/rados/rados_sync.o CXX tools/cephfs/cephfs-journal-tool.o CXX tools/cephfs/JournalTool.o CXX tools/cephfs/JournalFilter.o CXX tools/cephfs/JournalScanner.o CXX tools/cephfs/EventOutput.o CXX tools/cephfs/Dumper.o CXX tools/cephfs/Resetter.o CXX tools/cephfs/MDSUtility.o CC rbd_fuse/rbd-fuse.o CC mount/mount.ceph.o rm -f ceph ceph.tmp echo "#!/usr/bin/env python" >ceph.tmp grep "#define CEPH_GIT_NICE_VER" ./ceph_ver.h | \ sed -e 's/#define \(.*VER\) /\1=/' >>ceph.tmp grep "#define CEPH_GIT_VER" ./ceph_ver.h | \ sed -e 's/#define \(.*VER\) /\1=/' -e 's/=\(.*\)$/="\1"/' >>ceph.tmp cat ./ceph.in >>ceph.tmp chmod a+x ceph.tmp chmod a-w ceph.tmp mv ceph.tmp ceph AR libcls_version_client.a AR libcls_log_client.a AR libcls_statelog_client.a AR libcls_replica_log_client.a AR libcls_user_client.a CXX erasure-code/jerasure/libec_jerasure_generic_la-ErasureCodePluginJerasure.lo CXXLD libcrush.la CXX erasure-code/jerasure/libec_jerasure_sse3_la-ErasureCodePluginJerasure.lo CXX erasure-code/jerasure/libec_jerasure_sse4_la-ErasureCodePluginJerasure.lo CXXLD libjson_spirit.la CXXLD libec_missing_entry_point.la CXXLD libec_missing_version.la CXXLD libec_hangs.la CXXLD libec_fail_to_initialize.la CXXLD libec_fail_to_register.la CXXLD libec_test_jerasure_sse4.la CXXLD libec_test_jerasure_sse3.la CXXLD libec_test_jerasure_generic.la CXXLD libcls_lock_client.la CXXLD libosdc.la CXXLD libauth.la CXXLD liblog.la CXXLD libarch.la CXXLD libcls_rbd_client.la CXXLD libos_types.la CXXLD libperfglue.la CXXLD libcls_refcount_client.la CXXLD libcls_rgw_client.la CXXLD libradostest.la CXXLD libcls_hello.la CXXLD libcls_refcount.la CXXLD libcls_rgw.la CXXLD libcls_kvs.la CXXLD libec_jerasure.la CXXLD libec_lrc.la CXXLD libec_example.la CXXLD libos.la CXXLD libradosstripertest.la copying 
copying selected object files to avoid basename conflicts... CXXLD libec_jerasure_generic.la CXXLD libec_jerasure_sse4.la CXXLD libec_jerasure_sse3.la CXXLD liberasure_code.la CXXLD librgw.la CXXLD libmsg.la CXXLD libclient.la CXXLD libclient_fuse.la CXXLD libcommon.la CXXLD librados.la CXXLD libcephfs.la CXXLD libmon.la CXXLD libglobal.la CXXLD ceph_test_rewrite_latency CXXLD ceph_test_get_blkdev_size CXXLD cephfs CCLD mount.ceph copying selected object files to avoid basename conflicts... CXXLD libosd.la copying selected object files to avoid basename conflicts... CXXLD libsystest.la CXXLD ceph_test_signal_handlers CXXLD ceph_test_timers CXXLD ceph_test_msgr CXXLD ceph_streamtest CXXLD ceph_test_trans CXXLD ceph_test_crypto CXXLD ceph_bench_log CXXLD ceph_test_mon_workloadgen CXXLD ceph_test_mon_msg CXXLD ceph_test_objectstore CXXLD ceph_test_filestore CXXLD ceph_test_objectstore_workloadgen CXXLD ceph_test_filestore_idempotent CXXLD ceph_test_filestore_idempotent_sequence CXXLD ceph_xattr_bench CXXLD ceph_test_filejournal CXXLD ceph_test_objectcacher_stress CXXLD ceph_test_object_map CXXLD ceph_test_keyvaluedb_atomicity CXXLD ceph_test_keyvaluedb_iterators CXXLD ceph-osdomap-tool CXXLD ceph-monstore-tool CXXLD ceph-kvstore-tool CXXLD ceph_psim CXXLD ceph_dupstore CXXLD monmaptool CXXLD crushtool CXXLD osdmaptool CXXLD ceph-conf CXXLD ceph-authtool CXXLD ceph-syn CXXLD ceph-fuse copying selected object files to avoid basename conflicts... CXXLD ceph_test_keys CXXLD ceph-mon CXXLD ceph_mon_store_converter CXXLD get_command_descriptions CXXLD ceph_erasure_code CXXLD ceph_erasure_code_benchmark CXXLD ceph_test_snap_mapper CXXLD ceph-osd CXXLD libmds.la CXXLD libradosstriper.la CXXLD librbd.la CXXLD ceph_rgw_multiparser CXXLD ceph_rgw_jsonparser CXXLD ceph_test_rados CXXLD ceph_test_mutate CXXLD ceph_smalliobench CXXLD ceph_smalliobenchfs CXXLD ceph_smalliobenchdumb CXXLD ceph_tpbench CXXLD ceph_omapbench CXXLD ceph_kvstorebench CXXLD ceph_test_rados_list_parallel CXXLD ceph_test_rados_open_pools_parallel CXXLD ceph_test_rados_watch_notify CXXLD ceph_test_rados_delete_pools_parallel CXXLD ceph_test_cors CXXLD ceph_test_rgw_manifest CXXLD ceph_test_cls_rgw_meta CXXLD ceph_test_cls_rgw_log CXXLD ceph_test_cls_rgw_opstate CXXLD ceph_multi_stress_watch CXXLD ceph_test_cls_refcount CXXLD ceph_test_cls_rbd CXXLD ceph_test_cls_version CXXLD ceph_test_cls_log CXXLD ceph_test_cls_statelog CXXLD ceph_test_cls_replica_log CXXLD ceph_test_cls_lock CXXLD ceph_test_cls_hello CXXLD ceph_test_cls_rgw CXXLD ceph_test_rados_api_cmd CXXLD ceph_test_rados_api_io CXXLD ceph_test_rados_api_c_write_operations CXXLD ceph_test_rados_api_c_read_operations CXXLD ceph_test_rados_api_aio CXXLD ceph_test_rados_api_list CXXLD ceph_test_rados_api_pool CXXLD ceph_test_rados_api_stat CXXLD ceph_test_rados_api_watch_notify CXXLD ceph_test_rados_api_snapshots CXXLD ceph_test_rados_api_cls CXXLD ceph_test_rados_api_misc CXXLD ceph_test_rados_api_tier CXXLD ceph_test_rados_api_lock CXXLD ceph_test_stress_watch CCLD ceph_scratchtool CXXLD ceph_scratchtoolpp CXXLD ceph_radosacl CXXLD radosgw CXXLD radosgw-admin CXXLD rados CXXLD ceph_objectstore_tool CXXLD librados-config CXXLD ceph_test_rados_striper_api_io CXXLD ceph_test_rados_striper_api_aio CXXLD ceph_test_rados_striper_api_striping CXXLD libcephfs_jni.la CCLD ceph_test_c_headers CXXLD ceph_test_libcephfs CXXLD ceph-client-debug CXXLD librbd_replay.la CXXLD ceph_test_librbd CXXLD ceph_test_librbd_fsx CXXLD ceph_smalliobenchrbd CXXLD rbd CCLD rbd-fuse CXXLD cephfs-journal-tool CXXLD
ceph-mds copying selected object files to avoid basename conflicts... CXXLD librbd_replay_ios.la CXXLD rbd-replay copying selected object files to avoid basename conflicts... CXXLD ceph-dencoder make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make[2]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make[1]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' Making all in man make[1]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/man' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/man' + [ -e src/gtest ] + ../maxtime 1800 ionice -c3 nice -n20 make check CC=ccache gcc CXX=ccache g++ Making check in . make[1]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build' make[2]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/gtest' make[2]: `lib/libgtest.a' is up to date. make[2]: `lib/libgtest_main.a' is up to date. make[2]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/gtest' make check-local make[2]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build' make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/gtest' make[3]: `lib/libgtest.a' is up to date. make[3]: `lib/libgtest_main.a' is up to date. make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/gtest' ./src/test/run-cli-tests './src/test' New python executable in ./src/test/virtualenv/bin/python Installing distribute.............................................................................................................................................................................................done. Installing pip...............done. Unpacking ./src/test/downloads/cram-0.5.0ceph.2011-01-14.tar.gz Running setup.py egg_info for package from file:///srv/autobuild-ceph/gitbuilder.git/build/src/test/downloads/cram-0.5.0ceph.2011-01-14.tar.gz Installing collected packages: cram Running setup.py install for cram changing mode of build/scripts-2.7/cram from 664 to 775 changing mode of /srv/autobuild-ceph/gitbuilder.git/build/src/test/virtualenv/bin/cram to 775 Successfully installed cram Cleaning up... src/test/cli/ceph-authtool/add-key-segv.t: passed src/test/cli/ceph-authtool/add-key.t: passed src/test/cli/ceph-authtool/cap-bin.t: passed src/test/cli/ceph-authtool/cap-invalid.t: passed src/test/cli/ceph-authtool/cap-overwrite.t: passed src/test/cli/ceph-authtool/cap.t: passed src/test/cli/ceph-authtool/create-gen-list-bin.t: passed src/test/cli/ceph-authtool/create-gen-list.t: passed src/test/cli/ceph-authtool/help.t: passed src/test/cli/ceph-authtool/list-empty-bin.t: passed src/test/cli/ceph-authtool/list-empty.t: passed src/test/cli/ceph-authtool/list-nonexistent-bin.t: passed src/test/cli/ceph-authtool/list-nonexistent.t: passed src/test/cli/ceph-authtool/manpage.t: passed src/test/cli/ceph-authtool/simple.t: passed # Ran 15 tests, 0 skipped, 0 failed. src/test/cli/ceph-conf/env-vs-args.t: passed src/test/cli/ceph-conf/help.t: passed src/test/cli/ceph-conf/invalid-args.t: passed src/test/cli/ceph-conf/manpage.t: passed src/test/cli/ceph-conf/option.t: passed src/test/cli/ceph-conf/sections.t: passed src/test/cli/ceph-conf/show-config-value.t: passed src/test/cli/ceph-conf/show-config.t: passed src/test/cli/ceph-conf/simple.t: passed # Ran 9 tests, 0 skipped, 0 failed. 
src/test/cli/crushtool/add-item.t: passed src/test/cli/crushtool/bad-mappings.t: passed src/test/cli/crushtool/build.t: passed src/test/cli/crushtool/compile-decompile-recompile.t: passed src/test/cli/crushtool/help.t: passed src/test/cli/crushtool/location.t: passed src/test/cli/crushtool/output-csv.t: passed src/test/cli/crushtool/reweight.t: passed src/test/cli/crushtool/reweight_multiple.t: passed src/test/cli/crushtool/set-choose.t: passed src/test/cli/crushtool/test-map-bobtail-tunables.t: passed src/test/cli/crushtool/test-map-firefly-tunables.t: passed src/test/cli/crushtool/test-map-firstn-indep.t: passed src/test/cli/crushtool/test-map-indep.t: passed src/test/cli/crushtool/test-map-legacy-tunables.t: passed src/test/cli/crushtool/test-map-tries-vs-retries.t: passed src/test/cli/crushtool/test-map-vary-r-0.t: passed src/test/cli/crushtool/test-map-vary-r-1.t: passed src/test/cli/crushtool/test-map-vary-r-2.t: passed src/test/cli/crushtool/test-map-vary-r-3.t: passed src/test/cli/crushtool/test-map-vary-r-4.t: passed # Ran 21 tests, 0 skipped, 0 failed. src/test/cli/monmaptool/add-exists.t: passed src/test/cli/monmaptool/add-many.t: passed src/test/cli/monmaptool/clobber.t: passed src/test/cli/monmaptool/create-print.t: passed src/test/cli/monmaptool/create-with-add.t: passed src/test/cli/monmaptool/help.t: passed src/test/cli/monmaptool/print-empty.t: passed src/test/cli/monmaptool/print-nonexistent.t: passed src/test/cli/monmaptool/rm-nonexistent.t: passed src/test/cli/monmaptool/rm.t: passed src/test/cli/monmaptool/simple.t: passed # Ran 11 tests, 0 skipped, 0 failed. src/test/cli/osdmaptool/clobber.t: passed src/test/cli/osdmaptool/create-print.t: passed src/test/cli/osdmaptool/create-racks.t: passed src/test/cli/osdmaptool/crush.t: passed src/test/cli/osdmaptool/help.t: passed src/test/cli/osdmaptool/missing-argument.t: passed src/test/cli/osdmaptool/pool.t: passed src/test/cli/osdmaptool/print-empty.t: passed src/test/cli/osdmaptool/print-nonexistent.t: passed src/test/cli/osdmaptool/test-map-pgs.t: passed # Ran 10 tests, 0 skipped, 0 failed. src/test/cli/radosgw-admin/help.t: passed # Ran 1 tests, 0 skipped, 0 failed. src/test/cli/rbd/help.t: passed src/test/cli/rbd/invalid-snap-usage.t: passed src/test/cli/rbd/not-enough-args.t: passed # Ran 3 tests, 0 skipped, 0 failed. make[2]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build' make[1]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build' Making check in src make[1]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make check-recursive make[2]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src' Making check in ocf make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/ocf' make[3]: Nothing to be done for `check'. make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/ocf' Making check in java make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java' make check-am make[4]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java' make[4]: Nothing to be done for `check-am'. make[4]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java' make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/java' Making check in tracing make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing' make check-am make[4]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing' make[4]: Nothing to be done for `check-am'. 
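The .t files reported above are cram tests, driven by the cram-0.5.0ceph package that run-cli-tests installed into a throwaway virtualenv a few lines earlier. A cram file is an executable transcript: lines indented two spaces and starting with "$ " are run through the shell, the indented lines that follow are the expected output, and "[N]" on its own line asserts a non-zero exit status N. A minimal self-contained example (not a file from the tree; save as example.t and run "cram example.t"):

  $ echo hello
  hello
  $ false
  [1]

On failure cram prints a unified diff of expected versus actual transcript, which is what would appear in place of the "passed" verdicts above.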
make[4]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing' make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src/tracing' make[3]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src' ./check_version ./.git_version ./.git_version is up to date. make unittest_erasure_code_plugin unittest_erasure_code_jerasure unittest_erasure_code_plugin_jerasure unittest_erasure_code_lrc unittest_erasure_code_plugin_lrc unittest_erasure_code_example unittest_encoding unittest_addrs unittest_bloom_filter unittest_histogram unittest_str_map unittest_sharedptr_registry unittest_shared_cache unittest_sloppy_crc_map unittest_util unittest_crush_indep unittest_osdmap unittest_workqueue unittest_striper unittest_prebufferedstreambuf unittest_str_list unittest_log unittest_throttle unittest_crush_wrapper unittest_base64 unittest_ceph_argparse unittest_ceph_compatset unittest_osd_types unittest_pglog unittest_ecbackend unittest_hitset unittest_lru unittest_io_priority unittest_gather unittest_run_cmd unittest_signals unittest_simple_spin unittest_librados unittest_bufferlist unittest_crc32c unittest_arch unittest_crypto unittest_crypto_init unittest_perf_counters unittest_admin_socket unittest_ceph_crypto unittest_utf8 unittest_mime unittest_escape unittest_chain_xattr unittest_flatindex unittest_strtol unittest_confutils unittest_config unittest_context unittest_heartbeatmap unittest_formatter unittest_libcephfs_config unittest_lfnindex unittest_librados_config unittest_daemon_config unittest_osd_osdcap unittest_mds_authcap unittest_mon_moncap unittest_mon_pgmap unittest_ipaddr unittest_texttable unittest_on_exit unittest_rbd_replay test/erasure-code/test-erasure-code.sh unittest_bufferlist.sh test/encoding/check-generated.sh test/mon/osd-pool-create.sh test/mon/misc.sh test/mon/osd-crush.sh test/mon/osd-erasure-code-profile.sh test/mon/mkfs.sh test/osd/osd-config.sh test/osd/osd-bench.sh test/ceph-disk.sh test/mon/mon-handle-forward.sh test/vstart_wrapped_tests.sh test/pybind/test_ceph_argparse.py make[4]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src' CXX erasure-code/unittest_erasure_code_plugin-ErasureCode.o CXX test/erasure-code/unittest_erasure_code_plugin-TestErasureCodePlugin.o ./check_version ./.git_version ./.git_version is up to date. 
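Each unittest_* name in the long target list above is a standalone gtest binary, so a single failure can be chased without rerunning the whole suite (which the autobuilder caps at 1800 seconds and runs at idle CPU/IO priority via the maxtime/ionice/nice wrapper shown earlier). From build/src, something like the following; the binary name is taken from the list, the filter pattern is arbitrary, and --gtest_list_tests / --gtest_filter are stock gtest flags:

  make unittest_bufferlist
  ./unittest_bufferlist --gtest_list_tests
  ./unittest_bufferlist --gtest_filter='*.append*'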
CXXLD unittest_erasure_code_plugin CXX test/erasure-code/unittest_erasure_code_jerasure-TestErasureCodeJerasure.o CXX erasure-code/unittest_erasure_code_jerasure-ErasureCode.o CC erasure-code/jerasure/jerasure/src/unittest_erasure_code_jerasure-cauchy.o CC erasure-code/jerasure/jerasure/src/unittest_erasure_code_jerasure-galois.o CC erasure-code/jerasure/jerasure/src/unittest_erasure_code_jerasure-jerasure.o CC erasure-code/jerasure/jerasure/src/unittest_erasure_code_jerasure-liberation.o CC erasure-code/jerasure/jerasure/src/unittest_erasure_code_jerasure-reed_sol.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_wgen.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_method.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_w16.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_w32.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_w64.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_w128.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_general.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_w4.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_rand.o CC erasure-code/jerasure/gf-complete/src/unittest_erasure_code_jerasure-gf_w8.o CXX erasure-code/jerasure/unittest_erasure_code_jerasure-ErasureCodePluginJerasure.o CXX erasure-code/jerasure/unittest_erasure_code_jerasure-ErasureCodeJerasure.o CXXLD unittest_erasure_code_jerasure CXX test/erasure-code/unittest_erasure_code_plugin_jerasure-TestErasureCodePluginJerasure.o CXXLD unittest_erasure_code_plugin_jerasure CXX test/erasure-code/unittest_erasure_code_lrc-TestErasureCodeLrc.o CXX erasure-code/unittest_erasure_code_lrc-ErasureCode.o CXX erasure-code/lrc/unittest_erasure_code_lrc-ErasureCodePluginLrc.o CXX erasure-code/lrc/unittest_erasure_code_lrc-ErasureCodeLrc.o CXXLD unittest_erasure_code_lrc CXX test/erasure-code/unittest_erasure_code_plugin_lrc-TestErasureCodePluginLrc.o CXXLD unittest_erasure_code_plugin_lrc CXX erasure-code/unittest_erasure_code_example-ErasureCode.o CXX test/erasure-code/unittest_erasure_code_example-TestErasureCodeExample.o CXXLD unittest_erasure_code_example CXX test/unittest_encoding-encoding.o CXXLD unittest_encoding CXX test/unittest_addrs-test_addrs.o CXXLD unittest_addrs CXX test/common/unittest_bloom_filter-test_bloom_filter.o CXXLD unittest_bloom_filter CXX test/common/unittest_histogram-histogram.o CXXLD unittest_histogram CXX test/common/unittest_str_map-test_str_map.o CXXLD unittest_str_map CXX test/common/unittest_sharedptr_registry-test_sharedptr_registry.o CXXLD unittest_sharedptr_registry CXX test/common/unittest_shared_cache-test_shared_cache.o CXXLD unittest_shared_cache CXX test/common/unittest_sloppy_crc_map-test_sloppy_crc_map.o CXXLD unittest_sloppy_crc_map CXX test/common/unittest_util-test_util.o CXXLD unittest_util CXX test/crush/unittest_crush_indep-indep.o CXXLD unittest_crush_indep CXX test/osd/unittest_osdmap-TestOSDMap.o CXXLD unittest_osdmap CXX test/unittest_workqueue-test_workqueue.o CXXLD unittest_workqueue CXX test/unittest_striper-test_striper.o CXXLD unittest_striper CXX test/unittest_prebufferedstreambuf-test_prebufferedstreambuf.o CXXLD unittest_prebufferedstreambuf CXX test/unittest_str_list-test_str_list.o CXXLD unittest_str_list CXX log/unittest_log-test.o 
CXXLD unittest_log CXX test/common/unittest_throttle-Throttle.o CXXLD unittest_throttle CXX test/crush/unittest_crush_wrapper-TestCrushWrapper.o CXXLD unittest_crush_wrapper CXX test/unittest_base64-base64.o CXXLD unittest_base64 CXX test/unittest_ceph_argparse-ceph_argparse.o CXXLD unittest_ceph_argparse CXX test/unittest_ceph_compatset-ceph_compatset.o CXXLD unittest_ceph_compatset CXX test/osd/unittest_osd_types-types.o CXXLD unittest_osd_types CXX test/osd/unittest_pglog-TestPGLog.o CXXLD unittest_pglog CXX test/osd/unittest_ecbackend-TestECBackend.o CXXLD unittest_ecbackend CXX test/osd/unittest_hitset-hitset.o CXXLD unittest_hitset CXX test/common/unittest_lru-test_lru.o CXXLD unittest_lru CXX test/common/unittest_io_priority-test_io_priority.o CXXLD unittest_io_priority CXX test/unittest_gather-gather.o CXXLD unittest_gather CXX test/unittest_run_cmd-run_cmd.o CXXLD unittest_run_cmd CXX test/unittest_signals-signals.o CXXLD unittest_signals CXX test/unittest_simple_spin-simple_spin.o CXXLD unittest_simple_spin CXX test/librados/unittest_librados-librados.o CXXLD unittest_librados CXX test/unittest_bufferlist-bufferlist.o CXXLD unittest_bufferlist CXX test/common/unittest_crc32c-test_crc32c.o CXXLD unittest_crc32c CXX test/unittest_arch-test_arch.o CXXLD unittest_arch CXX test/unittest_crypto-crypto.o CXXLD unittest_crypto CXX test/unittest_crypto_init-crypto_init.o CXXLD unittest_crypto_init CXX test/unittest_perf_counters-perf_counters.o CXXLD unittest_perf_counters CXX test/unittest_admin_socket-admin_socket.o CXXLD unittest_admin_socket CXX test/unittest_ceph_crypto-ceph_crypto.o CXXLD unittest_ceph_crypto CXX test/unittest_utf8-utf8.o CXXLD unittest_utf8 CXX test/unittest_mime-mime.o CXXLD unittest_mime CXX test/unittest_escape-escape.o CXXLD unittest_escape CXX test/objectstore/unittest_chain_xattr-chain_xattr.o CXXLD unittest_chain_xattr CXX test/os/unittest_flatindex-TestFlatIndex.o CXXLD unittest_flatindex CXX test/unittest_strtol-strtol.o CXXLD unittest_strtol CXX test/unittest_confutils-confutils.o CXXLD unittest_confutils CXX test/common/unittest_config-test_config.o CXXLD unittest_config CXX test/common/unittest_context-test_context.o CXXLD unittest_context CXX test/unittest_heartbeatmap-heartbeat_map.o CXXLD unittest_heartbeatmap CXX test/unittest_formatter-formatter.o CXX rgw/unittest_formatter-rgw_formats.o CXXLD unittest_formatter CXX test/unittest_libcephfs_config-libcephfs_config.o CXXLD unittest_libcephfs_config CXX test/os/unittest_lfnindex-TestLFNIndex.o CXXLD unittest_lfnindex CXX test/librados/unittest_librados_config-librados_config.o CXXLD unittest_librados_config CXX test/unittest_daemon_config-daemon_config.o CXXLD unittest_daemon_config CXX test/osd/unittest_osd_osdcap-osdcap.o CXXLD unittest_osd_osdcap CXX test/mds/unittest_mds_authcap-TestMDSAuthCaps.o CXXLD unittest_mds_authcap CXX test/mon/unittest_mon_moncap-moncap.o CXXLD unittest_mon_moncap CXX test/mon/unittest_mon_pgmap-PGMap.o CXXLD unittest_mon_pgmap CXX test/unittest_ipaddr-test_ipaddr.o CXXLD unittest_ipaddr CXX test/unittest_texttable-test_texttable.o CXXLD unittest_texttable CXX test/on_exit.o CXXLD unittest_on_exit CXX test/unittest_rbd_replay-test_rbd_replay.o CXXLD unittest_rbd_replay make[4]: Nothing to be done for `test/erasure-code/test-erasure-code.sh'. make[4]: Nothing to be done for `unittest_bufferlist.sh'. make[4]: Nothing to be done for `test/encoding/check-generated.sh'. make[4]: Nothing to be done for `test/mon/osd-pool-create.sh'. 
make[4]: Nothing to be done for `test/mon/misc.sh'. make[4]: Nothing to be done for `test/mon/osd-crush.sh'. make[4]: Nothing to be done for `test/mon/osd-erasure-code-profile.sh'. make[4]: Nothing to be done for `test/mon/mkfs.sh'. make[4]: Nothing to be done for `test/osd/osd-config.sh'. make[4]: Nothing to be done for `test/osd/osd-bench.sh'. make[4]: Nothing to be done for `test/ceph-disk.sh'. make[4]: Nothing to be done for `test/mon/mon-handle-forward.sh'. make[4]: Nothing to be done for `test/vstart_wrapped_tests.sh'. make[4]: Nothing to be done for `test/pybind/test_ceph_argparse.py'. make[4]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make check-TESTS check-local make[4]: Entering directory `/srv/autobuild-ceph/gitbuilder.git/build/src' ./check_version ./.git_version ./.git_version is up to date. 2014-10-08 11:12:33.623860 2b9e6425bf80 -1 did not load config file, using default settings. [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from ErasureCodePluginRegistryTest [ RUN ] ErasureCodePluginRegistryTest.factory_mutex Trying (1) with delay 0us Trying (1) with delay 2us [ OK ] ErasureCodePluginRegistryTest.factory_mutex (2 ms) [ RUN ] ErasureCodePluginRegistryTest.all [ OK ] ErasureCodePluginRegistryTest.all (6 ms) [----------] 2 tests from ErasureCodePluginRegistryTest (8 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (8 ms total) [ PASSED ] 2 tests. PASS: unittest_erasure_code_plugin 2014-10-08 11:12:33.650436 2abbd151bf80 -1 did not load config file, using default settings. [==========] Running 16 tests from 8 test cases. [----------] Global test environment set-up. [----------] 2 tests from ErasureCodeTest/0, where TypeParam = ErasureCodeJerasureReedSolomonVandermonde [ RUN ] ErasureCodeTest/0.encode_decode [ OK ] ErasureCodeTest/0.encode_decode (3 ms) [ RUN ] ErasureCodeTest/0.minimum_to_decode [ OK ] ErasureCodeTest/0.minimum_to_decode (0 ms) [----------] 2 tests from ErasureCodeTest/0 (3 ms total) [----------] 2 tests from ErasureCodeTest/1, where TypeParam = ErasureCodeJerasureReedSolomonRAID6 [ RUN ] ErasureCodeTest/1.encode_decode 2014-10-08 11:12:33.653647 2abbd151bf80 -1 ErasureCodeJerasure: ReedSolomonVandermonde: w=7 must be one of {8, 16, 32} : revert to DEFAULT_W [ OK ] ErasureCodeTest/1.encode_decode (0 ms) [ RUN ] ErasureCodeTest/1.minimum_to_decode [ OK ] ErasureCodeTest/1.minimum_to_decode (1 ms) [----------] 2 tests from ErasureCodeTest/1 (1 ms total) [----------] 2 tests from ErasureCodeTest/2, where TypeParam = ErasureCodeJerasureCauchyOrig [ RUN ] ErasureCodeTest/2.encode_decode [ OK ] ErasureCodeTest/2.encode_decode (0 ms) [ RUN ] ErasureCodeTest/2.minimum_to_decode [ OK ] ErasureCodeTest/2.minimum_to_decode (0 ms) [----------] 2 tests from ErasureCodeTest/2 (0 ms total) [----------] 2 tests from ErasureCodeTest/3, where TypeParam = ErasureCodeJerasureCauchyGood [ RUN ] ErasureCodeTest/3.encode_decode [ OK ] ErasureCodeTest/3.encode_decode (0 ms) [ RUN ] ErasureCodeTest/3.minimum_to_decode [ OK ] ErasureCodeTest/3.minimum_to_decode (0 ms) [----------] 2 tests from ErasureCodeTest/3 (0 ms total) [----------] 2 tests from ErasureCodeTest/4, where TypeParam = ErasureCodeJerasureLiberation [ RUN ] ErasureCodeTest/4.encode_decode [ OK ] ErasureCodeTest/4.encode_decode (0 ms) [ RUN ] ErasureCodeTest/4.minimum_to_decode [ OK ] ErasureCodeTest/4.minimum_to_decode (0 ms) [----------] 2 tests from ErasureCodeTest/4 (0 ms total) 
[----------] 2 tests from ErasureCodeTest/5, where TypeParam = ErasureCodeJerasureBlaumRoth [ RUN ] ErasureCodeTest/5.encode_decode [ OK ] ErasureCodeTest/5.encode_decode (1 ms) [ RUN ] ErasureCodeTest/5.minimum_to_decode [ OK ] ErasureCodeTest/5.minimum_to_decode (0 ms) [----------] 2 tests from ErasureCodeTest/5 (1 ms total) [----------] 2 tests from ErasureCodeTest/6, where TypeParam = ErasureCodeJerasureLiber8tion [ RUN ] ErasureCodeTest/6.encode_decode [ OK ] ErasureCodeTest/6.encode_decode (0 ms) [ RUN ] ErasureCodeTest/6.minimum_to_decode [ OK ] ErasureCodeTest/6.minimum_to_decode (0 ms) [----------] 2 tests from ErasureCodeTest/6 (0 ms total) [----------] 2 tests from ErasureCodeTest [ RUN ] ErasureCodeTest.encode [ OK ] ErasureCodeTest.encode (0 ms) [ RUN ] ErasureCodeTest.create_ruleset [ OK ] ErasureCodeTest.create_ruleset (0 ms) [----------] 2 tests from ErasureCodeTest (0 ms total) [----------] Global test environment tear-down [==========] 16 tests from 8 test cases ran. (5 ms total) [ PASSED ] 16 tests. 2014-10-08 11:12:33.654053 2abbd151bf80 -1 ErasureCodeJerasure: ReedSolomonRAID6: w=7 must be one of {8, 16, 32} : revert to 8 2014-10-08 11:12:33.654402 2abbd151bf80 -1 ErasureCodeJerasure: Cauchy: w=7 must be one of {8, 16, 32} : revert to 8 2014-10-08 11:12:33.654662 2abbd151bf80 -1 ErasureCodeJerasure: Cauchy: w=7 must be one of {8, 16, 32} : revert to 8 PASS: unittest_erasure_code_jerasure 2014-10-08 11:12:33.676493 2b2389439f80 -1 did not load config file, using default settings. [==========] Running 3 tests from 1 test case. [----------] Global test environment set-up. [----------] 3 tests from ErasureCodePlugin [ RUN ] ErasureCodePlugin.factory load: jerasure 2014-10-08 11:12:33.690743 2b2389439f80 -1 ErasureCodePluginJerasure: technique= is not a valid coding technique. Choose one of the following: reed_sol_van, reed_sol_r6_op, cauchy_orig, cauchy_good, liberation, blaum_roth, liber8tion 2014-10-08 11:12:33.690784 2b2389439f80 -1 ErasureCodePluginSelectJerasure: [ OK ] ErasureCodePlugin.factory (14 ms) [ RUN ] ErasureCodePlugin.select 2014-10-08 11:12:33.691400 2b2389439f80 -1 ErasureCodePluginSelectJerasure: erasure_code_init(test_jerasure_sse4,.libs): (444) Unknown error 444 2014-10-08 11:12:33.691510 2b2389439f80 -1 ErasureCodePluginSelectJerasure: erasure_code_init(test_jerasure_sse3,.libs): (333) Unknown error 333 [ OK ] ErasureCodePlugin.select (0 ms) [ RUN ] ErasureCodePlugin.sse 2014-10-08 11:12:33.691618 2b2389439f80 -1 ErasureCodePluginSelectJerasure: erasure_code_init(test_jerasure_generic,.libs): (111) Connection refused load: jerasure_generic load: jerasure_sse3 [ OK ] ErasureCodePlugin.sse (20 ms) [----------] 3 tests from ErasureCodePlugin (34 ms total) [----------] Global test environment tear-down [==========] 3 tests from 1 test case ran. (35 ms total) [ PASSED ] 3 tests. PASS: unittest_erasure_code_plugin_jerasure 2014-10-08 11:12:33.730680 2b9226dacf80 -1 did not load config file, using default settings. [==========] Running 12 tests from 2 test cases. [----------] Global test environment set-up. 
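The ErasureCodePluginJerasure error above is intentional: the test passes an empty technique= and the plugin answers with the complete list of valid jerasure techniques (reed_sol_van, reed_sol_r6_op, cauchy_orig, cauchy_good, liberation, blaum_roth, liber8tion). On a live cluster the same knobs are set through an erasure-code profile; a hedged sketch of the documented CLI, with the profile name and k/m values chosen arbitrarily:

  ceph osd erasure-code-profile set exampleprofile \
      plugin=jerasure k=4 m=2 technique=reed_sol_van
  ceph osd erasure-code-profile get exampleprofile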
[----------] 11 tests from ErasureCodeLrc [ RUN ] ErasureCodeLrc.parse_ruleset ruleset-steps='0' must be a JSON array but is of type 4 instead failed to parse ruleset-steps='{' at line 1, column 2 : not an object element of the array [0] must be a JSON array but 0 at position 0 is of type 4 instead element 0 of the array [0] found in [[0]] must be a JSON string but is of type 4 instead element 1 of the array ["choose",0] found in [["choose", 0]] must be a JSON string but is of type 4 instead element 2 of the array ["choose","host",[]] found in [["choose", "host", []]] must be a JSON int but is of type 1 instead [ OK ] ErasureCodeLrc.parse_ruleset (1 ms) [ RUN ] ErasureCodeLrc.parse_kml All of k, m, l must be set or none of them in {k=4} The mapping parameter cannot be set when k, m, l are set in {k=4,l=3,m=2,mapping=SET} The layers parameter cannot be set when k, m, l are set in {k=4,l=3,layers=SET,m=2} The ruleset-steps parameter cannot be set when k, m, l are set in {k=4,l=3,m=2,ruleset-steps=SET} k + m must be a multiple of l in {k=4,l=7,m=2} k must be a multiple of (k + m) / l in {k=3,l=3,m=3} [ OK ] ErasureCodeLrc.parse_kml (1 ms) [ RUN ] ErasureCodeLrc.layers_description could not find 'layers' in {} layers='"not an array"' must be a JSON array but is of type 2 instead failed to parse layers='invalid json' at line 1, column 1 : not a value [ OK ] ErasureCodeLrc.layers_description (0 ms) [ RUN ] ErasureCodeLrc.layers_parse each element of the array [ 0 ] must be a JSON array but 0 at position 0 is of type 4 instead the first element of the entry 0 (first is zero) 0 in [ [ 0 ] ] is of type 4 instead of string the second element of the entry 0 (first is zero) 0 in [ [ "", 0 ] ] is of type 4 instead of string or object [ OK ] ErasureCodeLrc.layers_parse (1 ms) [ RUN ] ErasureCodeLrc.layers_sanity_checks the 'mapping' parameter is missing from {layers=[ ]}layers parameter has 0 which is less than the minimum of one. [ ] the first element of the array at position 0 (starting from zero) is the string 'AA?? found in the layers parameter [ [ "AA??", "" ], [ "AA", "" ], [ "AA", "" ], ]. It is expected to be 2 characters long but is 4 characters long instead [ OK ] ErasureCodeLrc.layers_sanity_checks (12 ms) [ RUN ] ErasureCodeLrc.layers_init [ OK ] ErasureCodeLrc.layers_init (0 ms) [ RUN ] ErasureCodeLrc.init [ OK ] ErasureCodeLrc.init (1 ms) [ RUN ] ErasureCodeLrc.init_kml [ OK ] ErasureCodeLrc.init_kml (0 ms) [ RUN ] ErasureCodeLrc.minimum_to_decode [ OK ] ErasureCodeLrc.minimum_to_decode (2 ms) [ RUN ] ErasureCodeLrc.encode_decode 2014-10-08 11:12:33.749396 2b9226dacf80 -1 ErasureCodeLrc: minimum_to_decode not enough chunks in 0,1,4,5,6 to read 8 [ OK ] ErasureCodeLrc.encode_decode (1 ms) [ RUN ] ErasureCodeLrc.encode_decode_2 [ OK ] ErasureCodeLrc.encode_decode_2 (1 ms) [----------] 11 tests from ErasureCodeLrc (20 ms total) [----------] 1 test from ErasureCodeTest [ RUN ] ErasureCodeTest.create_ruleset [ OK ] ErasureCodeTest.create_ruleset (18 ms) [----------] 1 test from ErasureCodeTest (19 ms total) [----------] Global test environment tear-down [==========] 12 tests from 2 test cases ran. (39 ms total) [ PASSED ] 12 tests. PASS: unittest_erasure_code_lrc 2014-10-08 11:12:33.784276 2b3753e4cf80 -1 did not load config file, using default settings. [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. 
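The parse_kml rejections above are the LRC plugin enforcing two divisibility constraints on the k/m/l shorthand; in LaTeX form:

  l \mid (k+m) \qquad \text{and} \qquad \frac{k+m}{l} \,\Big|\, k

So {k=4,l=7,m=2} fails the first check (k+m=6 is not a multiple of 7), {k=3,l=3,m=3} fails the second ((3+3)/3=2 does not divide k=3), and a triple such as {k=4,l=3,m=2} passes both (3 divides 6, and 2 divides 4).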
[----------] 1 test from ErasureCodePlugin [ RUN ] ErasureCodePlugin.factory load: lrc [ OK ] ErasureCodePlugin.factory (30 ms) [----------] 1 test from ErasureCodePlugin (30 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (30 ms total) [ PASSED ] 1 test. PASS: unittest_erasure_code_plugin_lrc 2014-10-08 11:12:33.832594 2b2f52f04a40 -1 did not load config file, using default settings. [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. [----------] 6 tests from ErasureCodeExample [ RUN ] ErasureCodeExample.chunk_size [ OK ] ErasureCodeExample.chunk_size (0 ms) [ RUN ] ErasureCodeExample.minimum_to_decode [ OK ] ErasureCodeExample.minimum_to_decode (0 ms) [ RUN ] ErasureCodeExample.minimum_to_decode_with_cost [ OK ] ErasureCodeExample.minimum_to_decode_with_cost (0 ms) [ RUN ] ErasureCodeExample.encode_decode [ OK ] ErasureCodeExample.encode_decode (0 ms) [ RUN ] ErasureCodeExample.decode [ OK ] ErasureCodeExample.decode (0 ms) [ RUN ] ErasureCodeExample.create_ruleset 2014-10-08 11:12:33.833452 2b2f52f04a40 1 insert_item existing bucket has type '' != 'root' 2014-10-08 11:12:33.833511 2b2f52f04a40 1 insert_item existing bucket has type '' != 'root' [ OK ] ErasureCodeExample.create_ruleset (0 ms) [----------] 6 tests from ErasureCodeExample (0 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (0 ms total) [ PASSED ] 6 tests. PASS: unittest_erasure_code_example Running main() from gtest_main.cc [==========] Running 5 tests from 1 test case. [----------] Global test environment set-up. [----------] 5 tests from EncodingRoundTrip [ RUN ] EncodingRoundTrip.StringSimple [ OK ] EncodingRoundTrip.StringSimple (0 ms) [ RUN ] EncodingRoundTrip.StringEmpty [ OK ] EncodingRoundTrip.StringEmpty (0 ms) [ RUN ] EncodingRoundTrip.StringNewline [ OK ] EncodingRoundTrip.StringNewline (0 ms) [ RUN ] EncodingRoundTrip.Multimap [ OK ] EncodingRoundTrip.Multimap (0 ms) [ RUN ] EncodingRoundTrip.MultimapConstructorCounter [ OK ] EncodingRoundTrip.MultimapConstructorCounter (0 ms) [----------] 5 tests from EncodingRoundTrip (0 ms total) [----------] Global test environment tear-down [==========] 5 tests from 1 test case ran. (1 ms total) [ PASSED ] 5 tests. PASS: unittest_encoding Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from Msgr [ RUN ] Msgr.TestAddrParsing '127.0.0.1' -> '127.0.0.1:0/0' + '' '127.0.0.1 foo' -> '127.0.0.1:0/0' + ' foo' '127.0.0.1:1234 foo' -> '127.0.0.1:1234/0' + ' foo' '127.0.0.1:1234/5678 foo' -> '127.0.0.1:1234/5678' + ' foo' '1.2.3:4 a' -> '' + '' '2607:f298:4:2243::5522' -> '[2607:f298:4:2243::5522]:0/0' + '' '[2607:f298:4:2243::5522]' -> '[2607:f298:4:2243::5522]:0/0' + '' '2607:f298:4:2243::5522a' -> '' + '' '[2607:f298:4:2243::5522]a' -> '[2607:f298:4:2243::5522]:0/0' + 'a' '[2607:f298:4:2243::5522]:1234a' -> '[2607:f298:4:2243::5522]:1234/0' + 'a' '2001:0db8:85a3:0000:0000:8a2e:0370:7334' -> '[2001:db8:85a3::8a2e:370:7334]:0/0' + '' '2001:2db8:85a3:4334:4324:8a2e:1370:7334' -> '[2001:2db8:85a3:4334:4324:8a2e:1370:7334]:0/0' + '' '::' -> '[::]:0/0' + '' '::zz' -> '[::]:0/0' + 'zz' ':: 12:34' -> '[::]:0/0' + ' 12:34' [ OK ] Msgr.TestAddrParsing (0 ms) [----------] 1 test from Msgr (0 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (0 ms total) [ PASSED ] 1 test. 
PASS: unittest_addrs Running main() from gtest_main.cc [==========] Running 7 tests from 1 test case. [----------] Global test environment set-up. [----------] 7 tests from BloomFilter [ RUN ] BloomFilter.Basic [ OK ] BloomFilter.Basic (0 ms) [ RUN ] BloomFilter.Empty [ OK ] BloomFilter.Empty (0 ms) [ RUN ] BloomFilter.Sweep
# max fpp actual size B/insert
16 0.00100 0.00125 71 4.43750
16 0.00400 0.00375 65 4.06250
16 0.01600 0.04625 60 3.75000
16 0.06400 0.09375 54 3.37500
16 0.25600 0.25125 48 3.00000
64 0.00100 0.00141 157 2.45312
64 0.00400 0.00297 134 2.09375
64 0.01600 0.02594 111 1.73438
64 0.06400 0.07891 88 1.37500
64 0.25600 0.32734 65 1.01562
256 0.00100 0.00121 502 1.96094
256 0.00400 0.00410 410 1.60156
256 0.01600 0.01863 318 1.24219
256 0.06400 0.06703 225 0.87891
256 0.25600 0.25637 133 0.51953
1024 0.00100 0.00098 1883 1.83887
1024 0.00400 0.00505 1513 1.47754
1024 0.01600 0.01807 1144 1.11719
1024 0.06400 0.06472 775 0.75684
1024 0.25600 0.25829 405 0.39551
4096 0.00100 0.00103 7404 1.80762
4096 0.00400 0.00384 5926 1.44678
4096 0.01600 0.01527 4449 1.08618
4096 0.06400 0.06314 2972 0.72559
4096 0.25600 0.25132 1495 0.36499
[ OK ] BloomFilter.Sweep (3705 ms) [ RUN ] BloomFilter.SweepInt
# max fpp actual size B/insert density approx_element_count
16 0.00100 0.00187 71 4.43750 0.51724 16.55172
16 0.00400 0.01125 65 4.06250 0.55435 17.73913
16 0.01600 0.01625 60 3.75000 0.48611 15.55556
16 0.06400 0.08375 54 3.37500 0.54167 17.33333
16 0.25600 0.29812 48 3.00000 0.54167 17.33333
64 0.00100 0.00047 157 2.45312 0.51304 65.66957
64 0.00400 0.00422 134 2.09375 0.51495 65.91304
64 0.01600 0.01797 111 1.73438 0.51268 65.62319
64 0.06400 0.07312 88 1.37500 0.52446 67.13043
64 0.25600 0.30344 65 1.01562 0.54891 70.26087
256 0.00100 0.00090 502 1.96094 0.50625 259.20000
256 0.00400 0.00297 410 1.60156 0.49524 253.56522
256 0.01600 0.01727 318 1.24219 0.50815 260.17391
256 0.06400 0.06730 225 0.87891 0.51298 262.64481
256 0.25600 0.29219 133 0.51953 0.53846 275.69231
1024 0.00100 0.00102 1883 1.83887 0.50244 1029.00598
1024 0.00400 0.00426 1513 1.47754 0.50331 1030.78722
1024 0.01600 0.01791 1144 1.11719 0.50975 1043.97822
1024 0.06400 0.06703 775 0.75684 0.50853 1041.46248
1024 0.25600 0.25455 405 0.39551 0.50275 1029.64187
4096 0.00100 0.00105 7404 1.80762 0.49941 4091.13176
4096 0.00400 0.00386 5926 1.44678 0.50200 4112.35894
4096 0.01600 0.01613 4449 1.08618 0.50221 4114.12389
4096 0.06400 0.06375 2972 0.72559 0.50290 4119.76519
4096 0.25600 0.25786 1495 0.36499 0.50774 4159.42739
[ OK ] BloomFilter.SweepInt (352 ms) [ RUN ] BloomFilter.CompressibleSweep
# max ins est ins after tgtfpp actual size b/elem
1024 1024 1056 1056 0.01000 0.00962 1288 1.25781
1024 512 619 527 0.01000 0.00936 682 0.66602
1024 341 441 353 0.01000 0.00997 477 0.46582
1024 256 341 264 0.01000 0.00987 375 0.36621
1024 204 277 212 0.01000 0.00984 313 0.30566
1024 170 235 177 0.01000 0.01078 272 0.26562
1024 146 203 150 0.01000 0.00995 243 0.23730
1024 128 178 133 0.01000 0.01110 221 0.21582
1024 113 158 119 0.01000 0.01173 204 0.19922
[ OK ] BloomFilter.CompressibleSweep (187 ms) [ RUN ] BloomFilter.BinSweep
total_inserts 16384 target-fpp 0.01000
bins 1 bin-max 16384 bin-fpp 0.01000 actual-fpp 0.01012 total-size 19689
bins 2 bin-max 8192 bin-fpp 0.00500 actual-fpp 0.01014 total-size 22684
bins 3 bin-max 5461 bin-fpp 0.00333 actual-fpp 0.00994 total-size 24444
bins 4 bin-max 4096 bin-fpp 0.00250 actual-fpp 0.01012 total-size 25720
bins 5 bin-max 3276 bin-fpp 0.00200 actual-fpp 0.01015 total-size 26695
bins 6 bin-max 2730 bin-fpp 0.00167 actual-fpp 0.01012 total-size 27522
bins 7 bin-max 2340 bin-fpp 0.00143 actual-fpp 0.01021 total-size 28238
bins 8 bin-max 2048 bin-fpp 0.00125 actual-fpp 0.01004 total-size 28848
bins 9 bin-max 1820 bin-fpp 0.00111 actual-fpp 0.01002 total-size 29376
bins 10 bin-max 1638 bin-fpp 0.00100 actual-fpp 0.01025 total-size 29860
bins 11 bin-max 1489 bin-fpp 0.00091 actual-fpp 0.01059 total-size 30305
bins 12 bin-max 1365 bin-fpp 0.00083 actual-fpp 0.00990 total-size 30732
bins 13 bin-max 1260 bin-fpp 0.00077 actual-fpp 0.01013 total-size 31122
bins 14 bin-max 1170 bin-fpp 0.00071 actual-fpp 0.01047 total-size 31486
bins 15 bin-max 1092 bin-fpp 0.00067 actual-fpp 0.00999 total-size 31815
[ OK ] BloomFilter.BinSweep (3663 ms) [ RUN ] BloomFilter.Assignement [ OK ] BloomFilter.Assignement (0 ms) [----------] 7 tests from BloomFilter (7908 ms total) [----------] Global test environment tear-down [==========] 7 tests from 1 test case ran. (7908 ms total) [ PASSED ] 7 tests. PASS: unittest_bloom_filter Running main() from gtest_main.cc [==========] Running 8 tests from 1 test case. [----------] Global test environment set-up. [----------] 8 tests from Histogram [ RUN ] Histogram.Basic [ OK ] Histogram.Basic (0 ms) [ RUN ] Histogram.Set [ OK ] Histogram.Set (0 ms) [ RUN ] Histogram.Position [ OK ] Histogram.Position (0 ms) [ RUN ] Histogram.Position1 [ OK ] Histogram.Position1 (0 ms) [ RUN ] Histogram.Position2 [ OK ] Histogram.Position2 (0 ms) [ RUN ] Histogram.Position3 [ OK ] Histogram.Position3 (0 ms) [ RUN ] Histogram.Position4 [ OK ] Histogram.Position4 (0 ms) [ RUN ] Histogram.Decay [ OK ] Histogram.Decay (0 ms) [----------] 8 tests from Histogram (0 ms total) [----------] Global test environment tear-down [==========] 8 tests from 1 test case ran. (0 ms total) [ PASSED ] 8 tests. PASS: unittest_histogram Running main() from gtest_main.cc [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from str_map [ RUN ] str_map.json [ OK ] str_map.json (0 ms) [ RUN ] str_map.plaintext [ OK ] str_map.plaintext (0 ms) [----------] 2 tests from str_map (0 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (0 ms total) [ PASSED ] 2 tests. PASS: unittest_str_map 2014-10-08 11:12:42.510970 2b31a631fc40 -1 did not load config file, using default settings. [==========] Running 7 tests from 2 test cases. [----------] Global test environment set-up. [----------] 6 tests from SharedPtrRegistry_all [ RUN ] SharedPtrRegistry_all.lookup_or_create [ OK ] SharedPtrRegistry_all.lookup_or_create (0 ms) [ RUN ] SharedPtrRegistry_all.wait_lookup_or_create [ OK ] SharedPtrRegistry_all.wait_lookup_or_create (1 ms) [ RUN ] SharedPtrRegistry_all.lookup [ OK ] SharedPtrRegistry_all.lookup (0 ms) [ RUN ] SharedPtrRegistry_all.wait_lookup [ OK ] SharedPtrRegistry_all.wait_lookup (0 ms) [ RUN ] SharedPtrRegistry_all.get_next [ OK ] SharedPtrRegistry_all.get_next (0 ms) [ RUN ] SharedPtrRegistry_all.remove [ OK ] SharedPtrRegistry_all.remove (0 ms) [----------] 6 tests from SharedPtrRegistry_all (1 ms total) [----------] 1 test from SharedPtrRegistry_destructor [ RUN ] SharedPtrRegistry_destructor.destructor [ OK ] SharedPtrRegistry_destructor.destructor (0 ms) [----------] 1 test from SharedPtrRegistry_destructor (0 ms total) [----------] Global test environment tear-down [==========] 7 tests from 2 test cases ran. (1 ms total) [ PASSED ] 7 tests.
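The B/insert column of the BloomFilter.Sweep table above tracks the textbook sizing rule: for n insertions at target false-positive probability p, the optimal filter size is

  m = -\frac{n \ln p}{(\ln 2)^2} \text{ bits} \quad\Longrightarrow\quad \text{bytes/insert} = -\frac{\ln p}{8 (\ln 2)^2} \approx 0.2602 \, \ln(1/p)

For p = 0.001 that predicts about 1.80 bytes per insert, which the max=4096 row matches almost exactly (1.80762); the smaller filters land higher because their fixed overhead is amortized over fewer insertions.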
PASS: unittest_sharedptr_registry 2014-10-08 11:12:42.523733 2b23b2fa8c40 -1 did not load config file, using default settings. [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. [----------] 6 tests from SharedLRU_all [ RUN ] SharedLRU_all.add [ OK ] SharedLRU_all.add (0 ms) [ RUN ] SharedLRU_all.lookup [ OK ] SharedLRU_all.lookup (0 ms) [ RUN ] SharedLRU_all.wait_lookup [ OK ] SharedLRU_all.wait_lookup (0 ms) [ RUN ] SharedLRU_all.lower_bound [ OK ] SharedLRU_all.lower_bound (0 ms) [ RUN ] SharedLRU_all.wait_lower_bound [ OK ] SharedLRU_all.wait_lower_bound (0 ms) [ RUN ] SharedLRU_all.clear [ OK ] SharedLRU_all.clear (0 ms) [----------] 6 tests from SharedLRU_all (0 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (0 ms total) [ PASSED ] 6 tests. PASS: unittest_shared_cache Running main() from gtest_main.cc [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from SloppyCRCMap [ RUN ] SloppyCRCMap.basic offset 12 len 4 has crc 1389788932 expected 595764104 [ OK ] SloppyCRCMap.basic (0 ms) [ RUN ] SloppyCRCMap.truncate offset 4 len 4 has crc 2422111912 expected 595764104 [ OK ] SloppyCRCMap.truncate (0 ms) [ RUN ] SloppyCRCMap.zero offset 4 len 4 has crc 2422111912 expected 595764104 offset 4 len 4 has crc 595764104 expected 3080238136 offset 4 len 4 has crc 2422111912 expected 3080238136 offset 0 len 4 has crc 595764104 expected 3080238136 [ OK ] SloppyCRCMap.zero (1 ms) [ RUN ] SloppyCRCMap.clone_range offset 0 len 4 has crc 2422111912 expected 595764104 offset 4 len 4 has crc 1774956772 expected 250211487 offset 16 len 4 has crc 595764104 expected 2422111912 offset 20 len 4 has crc 250211487 expected 1774956772 offset 16 len 4 has crc 595764104 expected 2422111912 offset 0 len 4 has crc 595764104 expected 2422111912 offset 4 len 4 has crc 250211487 expected 1774956772 offset 8 len 4 has crc 595764104 expected 2422111912 offset 12 len 4 has crc 250211487 expected 1774956772 [ OK ] SloppyCRCMap.clone_range (0 ms) [----------] 4 tests from SloppyCRCMap (1 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (1 ms total) [ PASSED ] 4 tests. PASS: unittest_sloppy_crc_map Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from util [ RUN ] util.unit_to_bytesize [ OK ] util.unit_to_bytesize (0 ms) [----------] 1 test from util (0 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (0 ms total) [ PASSED ] 1 test. PASS: unittest_util [==========] Running 5 tests from 1 test case. [----------] Global test environment set-up. [----------] 5 tests from CRUSH [ RUN ] CRUSH.indep_toosmall # id weight type name reweight -1 3 root default -3 3 rack rack-0 -2 1 host host-0-0 0 1 osd.0 1 -4 1 host host-0-1 1 1 osd.1 1 -5 1 host host-0-2 2 1 osd.2 1 0 -> [2,1,2147483647,0,2147483647] 2014-10-08 11:12:42.568897 2b314d980c40 -1 did not load config file, using default settings. 
1 -> [2,2147483647,0,2147483647,1] 2 -> [1,0,2,2147483647,2147483647] 3 -> [1,2,0,2147483647,2147483647] 4 -> [0,2,2147483647,2147483647,1] 5 -> [1,2,2147483647,0,2147483647] 6 -> [1,2147483647,2147483647,2,0] 7 -> [2,0,2147483647,1,2147483647] 8 -> [0,2,2147483647,1,2147483647] 9 -> [1,2147483647,2,2147483647,0] 10 -> [2,2147483647,1,0,2147483647] 11 -> [2,2147483647,0,2147483647,1] 12 -> [1,2147483647,0,2,2147483647] 13 -> [0,2,1,2147483647,2147483647] 14 -> [1,2,2147483647,2147483647,0] 15 -> [0,1,2,2147483647,2147483647] 16 -> [1,2,0,2147483647,2147483647] 17 -> [2,0,1,2147483647,2147483647] 18 -> [1,2147483647,2147483647,0,2] 19 -> [1,0,2147483647,2,2147483647] 20 -> [2,1,2147483647,0,2147483647] 21 -> [0,2,2147483647,1,2147483647] 22 -> [1,0,2,2147483647,2147483647] 23 -> [2,1,2147483647,2147483647,0] 24 -> [1,2,2147483647,0,2147483647] 25 -> [1,2147483647,2147483647,2,0] 26 -> [0,1,2,2147483647,2147483647] 27 -> [0,1,2147483647,2,2147483647] 28 -> [0,1,2,2147483647,2147483647] 29 -> [0,2147483647,2,2147483647,1] 30 -> [0,2147483647,2147483647,2,1] 31 -> [2,1,2147483647,0,2147483647] 32 -> [2,0,2147483647,2147483647,1] 33 -> [2,1,2147483647,0,2147483647] 34 -> [2,2147483647,0,2147483647,1] 35 -> [2,0,2147483647,2147483647,1] 36 -> [0,1,2,2147483647,2147483647] 37 -> [2,0,2147483647,1,2147483647] 38 -> [0,2,1,2147483647,2147483647] 39 -> [2,1,0,2147483647,2147483647] 40 -> [2,2147483647,0,1,2147483647] 41 -> [1,2,0,2147483647,2147483647] 42 -> [0,1,2147483647,2147483647,2] 43 -> [1,0,2,2147483647,2147483647] 44 -> [2,1,2147483647,2147483647,0] 45 -> [1,2,0,2147483647,2147483647] 46 -> [1,2,0,2147483647,2147483647] 47 -> [2,2147483647,0,1,2147483647] 48 -> [2,0,1,2147483647,2147483647] 49 -> [2,0,1,2147483647,2147483647] 50 -> [1,0,2,2147483647,2147483647] 51 -> [0,2,1,2147483647,2147483647] 52 -> [2,2147483647,0,2147483647,1] 53 -> [0,2,1,2147483647,2147483647] 54 -> [0,1,2,2147483647,2147483647] 55 -> [1,2,0,2147483647,2147483647] 56 -> [2,2147483647,1,0,2147483647] 57 -> [2,1,2147483647,0,2147483647] 58 -> [1,0,2147483647,2,2147483647] 59 -> [0,2,2147483647,1,2147483647] 60 -> [2,0,1,2147483647,2147483647] 61 -> [0,2,2147483647,2147483647,1] 62 -> [2,0,2147483647,2147483647,1] 63 -> [1,2,0,2147483647,2147483647] 64 -> [0,2,2147483647,1,2147483647] 65 -> [0,2147483647,1,2147483647,2] 66 -> [2,0,1,2147483647,2147483647] 67 -> [0,1,2147483647,2147483647,2] 68 -> [2,1,0,2147483647,2147483647] 69 -> [0,2,2147483647,2147483647,1] 70 -> [2,1,0,2147483647,2147483647] 71 -> [2,1,0,2147483647,2147483647] 72 -> [2,2147483647,0,2147483647,1] 73 -> [0,1,2147483647,2,2147483647] 74 -> [1,2147483647,2,0,2147483647] 75 -> [2,1,0,2147483647,2147483647] 76 -> [1,2,2147483647,2147483647,0] 77 -> [1,2,0,2147483647,2147483647] 78 -> [2,1,2147483647,0,2147483647] 79 -> [0,2147483647,2,2147483647,1] 80 -> [2,0,1,2147483647,2147483647] 81 -> [2,1,0,2147483647,2147483647] 82 -> [2,1,2147483647,2147483647,0] 83 -> [2,0,1,2147483647,2147483647] 84 -> [2,1,0,2147483647,2147483647] 85 -> [0,2147483647,2147483647,2,1] 86 -> [1,2147483647,2,0,2147483647] 87 -> [2,2147483647,2147483647,1,0] 88 -> [2,0,1,2147483647,2147483647] 89 -> [0,2,2147483647,2147483647,1] 90 -> [1,2,2147483647,0,2147483647] 91 -> [1,2,2147483647,0,2147483647] 92 -> [2,1,0,2147483647,2147483647] 93 -> [2,0,1,2147483647,2147483647] 94 -> [2,0,1,2147483647,2147483647] 95 -> [1,0,2147483647,2,2147483647] 96 -> [0,2,1,2147483647,2147483647] 97 -> [2,2147483647,1,0,2147483647] 98 -> [2,1,2147483647,2147483647,0] 99 -> [0,2147483647,2,1,2147483647] 
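The 2147483647 entries in these indep mappings are not OSD ids: the value is 0x7fffffff (INT32_MAX), CRUSH's "no item in this slot" placeholder (named CRUSH_ITEM_NONE in the upstream crush headers; that name is an inference from the source tree, the log itself never prints it). In indep mode each erasure-code shard must keep its position, so a slot that cannot be filled is reported as a hole rather than shifting the surviving items left:

  printf '%d\n' 0x7fffffff
  2147483647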
[ OK ] CRUSH.indep_toosmall (4 ms) [ RUN ] CRUSH.indep_basic # id weight type name reweight -1 27 root default -3 9 rack rack-0 -2 3 host host-0-0 0 1 osd.0 1 1 1 osd.1 1 2 1 osd.2 1 -4 3 host host-0-1 3 1 osd.3 1 4 1 osd.4 1 5 1 osd.5 1 -5 3 host host-0-2 6 1 osd.6 1 7 1 osd.7 1 8 1 osd.8 1 -7 9 rack rack-1 -6 3 host host-1-0 9 1 osd.9 1 10 1 osd.10 1 11 1 osd.11 1 -8 3 host host-1-1 12 1 osd.12 1 13 1 osd.13 1 14 1 osd.14 1 -9 3 host host-1-2 15 1 osd.15 1 16 1 osd.16 1 17 1 osd.17 1 -11 9 rack rack-2 -10 3 host host-2-0 18 1 osd.18 1 19 1 osd.19 1 20 1 osd.20 1 -12 3 host host-2-1 21 1 osd.21 1 22 1 osd.22 1 23 1 osd.23 1 -13 3 host host-2-2 24 1 osd.24 1 25 1 osd.25 1 26 1 osd.26 1 0 -> [11,3,23,2,7] 1 -> [9,7,1,3,13] 2 -> [17,9,22,5,14] 3 -> [4,6,0,10,15] 4 -> [14,22,19,0,9] 5 -> [3,25,10,22,1] 6 -> [10,3,17,25,1] 7 -> [19,10,25,21,12] 8 -> [10,6,0,22,14] 9 -> [22,18,8,16,0] 10 -> [25,12,5,9,20] 11 -> [13,24,18,22,5] 12 -> [24,18,22,8,2] 13 -> [15,6,14,25,2] 14 -> [13,8,24,20,0] 15 -> [14,4,25,10,15] 16 -> [22,12,0,8,17] 17 -> [24,0,11,8,5] 18 -> [3,17,25,1,18] 19 -> [5,20,2,21,15] 20 -> [14,2,3,20,24] 21 -> [14,17,1,9,3] 22 -> [20,16,11,6,4] 23 -> [9,25,5,8,17] 24 -> [12,20,6,0,4] 25 -> [12,16,20,26,11] 26 -> [18,4,17,10,13] 27 -> [23,25,14,0,20] 28 -> [2,13,23,7,4] 29 -> [2,19,6,24,22] 30 -> [11,0,20,23,17] 31 -> [8,4,21,25,18] 32 -> [19,2,7,11,13] 33 -> [14,5,21,24,9] 34 -> [13,25,21,6,4] 35 -> [6,1,13,10,25] 36 -> [1,23,16,13,4] 37 -> [25,17,22,11,20] 38 -> [1,8,3,14,9] 39 -> [16,9,22,8,24] 40 -> [23,18,15,26,13] 41 -> [19,6,0,25,5] 42 -> [18,3,14,11,21] 43 -> [10,0,21,6,16] 44 -> [11,20,2,17,22] 45 -> [3,24,16,12,9] 46 -> [16,21,26,14,11] 47 -> [25,15,10,6,2] 48 -> [17,6,19,26,21] 49 -> [20,12,4,17,7] 50 -> [14,9,21,17,6] 51 -> [1,20,13,5,23] 52 -> [14,16,8,24,3] 53 -> [23,0,19,9,16] 54 -> [18,3,16,14,10] 55 -> [3,8,22,9,25] 56 -> [18,16,3,2,14] 57 -> [16,20,7,26,21] 58 -> [16,11,23,0,14] 59 -> [0,14,24,20,9] 60 -> [22,19,1,8,4] 61 -> [24,14,2,17,4] 62 -> [14,17,9,6,19] 63 -> [18,7,5,24,9] 64 -> [19,26,11,4,6] 65 -> [2,14,3,9,8] 66 -> [12,18,4,6,26] 67 -> [0,18,21,13,6] 68 -> [8,17,24,18,10] 69 -> [20,6,26,15,5] 70 -> [9,23,14,26,16] 71 -> [25,5,2,22,6] 72 -> [6,9,16,13,2] 73 -> [21,3,24,6,16] 74 -> [11,3,13,7,19] 75 -> [14,19,17,23,6] 76 -> [12,7,10,22,26] 77 -> [20,25,0,9,13] 78 -> [17,24,9,1,12] 79 -> [16,6,14,19,3] 80 -> [13,24,5,8,0] 81 -> [12,5,1,8,23] 82 -> [23,10,6,5,26] 83 -> [15,2,6,26,13] 84 -> [10,24,6,20,17] 85 -> [25,13,17,19,3] 86 -> [4,7,24,16,12] 87 -> [15,20,21,4,13] 88 -> [10,1,16,23,25] 89 -> [22,20,14,16,6] 90 -> [3,7,24,21,15] 91 -> [11,22,8,0,17] 92 -> [17,7,24,19,22] 93 -> [23,12,1,9,15] 94 -> [15,23,13,5,11] 95 -> [13,1,26,3,11] 96 -> [2,14,18,5,9] 97 -> [21,19,11,2,12] 98 -> [18,16,6,10,1] 99 -> [0,15,25,3,9] [ OK ] CRUSH.indep_basic (6 ms) [ RUN ] CRUSH.indep_out_alt # id weight type name reweight -1 27 root default -3 9 rack rack-0 -2 3 host host-0-0 0 1 osd.0 0 1 1 osd.1 1 2 1 osd.2 0 -4 3 host host-0-1 3 1 osd.3 1 4 1 osd.4 0 5 1 osd.5 1 -5 3 host host-0-2 6 1 osd.6 0 7 1 osd.7 1 8 1 osd.8 0 -7 9 rack rack-1 -6 3 host host-1-0 9 1 osd.9 1 10 1 osd.10 0 11 1 osd.11 1 -8 3 host host-1-1 12 1 osd.12 0 13 1 osd.13 1 14 1 osd.14 0 -9 3 host host-1-2 15 1 osd.15 1 16 1 osd.16 0 17 1 osd.17 1 -11 9 rack rack-2 -10 3 host host-2-0 18 1 osd.18 0 19 1 osd.19 1 20 1 osd.20 0 -12 3 host host-2-1 21 1 osd.21 1 22 1 osd.22 0 23 1 osd.23 1 -13 3 host host-2-2 24 1 osd.24 0 25 1 osd.25 1 26 1 osd.26 1 0 -> [11,3,23,1,7,19,13,17,25] 1 -> [9,7,1,19,13,25,5,15,21] 2 -> 
[17,9,3,25,13,1,19,21,7] 3 -> [5,7,21,11,15,1,25,19,13] 4 -> [13,23,26,1,9,15,7,5,19] 5 -> [3,25,11,23,13,7,1,19,17] 6 -> [11,3,13,25,1,7,19,17,21] 7 -> [19,11,25,21,13,1,15,7,3] 8 -> [11,25,1,23,13,19,5,7,17] 9 -> [23,19,7,15,1,9,26,13,3] 10 -> [25,13,5,1,19,17,7,23,9] 11 -> [13,25,19,21,15,7,3,1,9] 12 -> [26,19,15,7,1,9,13,21,3] 13 -> [15,7,13,25,1,23,19,5,9] 14 -> [13,7,25,19,1,23,15,11,5] 15 -> [13,5,25,9,15,21,7,1,19] 16 -> [21,25,1,7,17,11,19,13,3] 17 -> [26,1,11,7,13,17,19,5,21] 18 -> [3,17,25,1,19,9,13,23,7] 19 -> [5,19,1,13,15,7,11,26,23] 20 -> [13,11,3,19,17,7,1,21,25] 21 -> [13,17,26,9,3,23,1,19,7] 22 -> [19,15,11,26,3,1,21,7,13] 23 -> [9,25,5,7,15,19,13,23,1] 24 -> [13,19,21,1,5,11,26,7,17] 25 -> [13,17,19,3,11,1,21,7,26] 26 -> [19,3,17,11,1,26,13,21,7] 27 -> [23,25,3,1,19,15,13,11,7] 28 -> [1,13,23,7,11,26,19,17,5] 29 -> [1,11,7,25,21,13,19,5,17] 30 -> [11,1,13,23,17,7,19,26,5] 31 -> [7,5,11,21,19,13,17,1,26] 32 -> [19,1,7,11,13,21,25,17,5] 33 -> [13,5,21,26,1,15,9,19,7] 34 -> [13,25,21,17,5,1,7,19,11] 35 -> [7,1,13,11,3,26,19,17,21] 36 -> [1,23,7,13,5,19,26,17,11] 37 -> [25,17,21,11,19,13,7,1,3] 38 -> [1,7,3,13,17,11,19,25,21] 39 -> [17,13,23,7,25,11,1,19,3] 40 -> [23,19,9,26,5,1,7,15,13] 41 -> [19,7,1,25,5,23,13,17,9] 42 -> [19,3,13,11,21,17,7,26,1] 43 -> [9,13,21,7,26,19,1,3,15] 44 -> [11,19,13,17,21,7,3,1,26] 45 -> [3,26,15,13,9,7,19,1,23] 46 -> [15,21,26,13,11,1,7,3,19] 47 -> [25,19,5,23,1,7,15,9,13] 48 -> [17,7,19,26,1,5,21,9,13] 49 -> [19,13,3,17,11,26,1,23,7] 50 -> [13,9,21,17,7,25,5,1,19] 51 -> [1,19,13,5,23,17,7,9,25] 52 -> [13,15,7,25,3,19,1,9,21] 53 -> [23,1,19,9,17,13,3,7,26] 54 -> [19,3,17,13,11,21,1,26,7] 55 -> [3,7,23,9,25,15,1,19,13] 56 -> [19,17,3,1,25,7,23,11,13] 57 -> [17,19,7,26,21,5,1,13,11] 58 -> [15,11,23,1,13,19,5,26,7] 59 -> [1,13,26,19,3,11,7,15,21] 60 -> [23,19,25,7,3,1,13,17,9] 61 -> [25,13,19,17,5,7,21,1,9] 62 -> [13,17,9,7,19,21,1,26,5] 63 -> [19,1,5,25,13,21,11,17,7] 64 -> [19,26,11,5,1,21,13,17,7] 65 -> [1,13,3,19,7,21,11,26,17] 66 -> [13,19,5,7,17,21,26,1,11] 67 -> [1,19,5,9,7,17,21,26,13] 68 -> [7,17,5,1,11,19,13,26,21] 69 -> [19,7,26,1,5,11,15,21,13] 70 -> [9,23,7,26,15,19,13,1,3] 71 -> [25,5,1,23,7,15,19,13,9] 72 -> [7,9,17,13,5,19,1,21,26] 73 -> [21,3,26,7,15,1,13,11,19] 74 -> [11,3,1,7,26,13,21,17,19] 75 -> [13,19,3,17,7,9,1,26,23] 76 -> [13,7,11,23,26,19,5,1,17] 77 -> [19,25,1,9,7,13,15,23,3] 78 -> [17,26,9,1,13,23,7,19,5] 79 -> [17,7,13,19,3,1,21,26,11] 80 -> [13,26,23,7,1,9,17,3,19] 81 -> [13,5,1,19,7,17,26,21,9] 82 -> [23,11,7,13,26,17,19,1,3] 83 -> [15,1,7,26,21,9,19,13,5] 84 -> [9,25,7,19,17,5,23,1,13] 85 -> [25,13,17,19,7,23,5,9,1] 86 -> [5,9,25,15,13,21,7,1,19] 87 -> [15,19,21,3,7,26,1,9,13] 88 -> [9,1,5,23,25,13,7,15,19] 89 -> [23,19,13,15,1,7,25,5,11] 90 -> [3,7,25,21,15,1,19,11,13] 91 -> [11,23,26,1,17,19,13,7,5] 92 -> [17,7,26,9,19,21,5,13,1] 93 -> [23,13,1,9,15,7,3,26,19] 94 -> [15,23,13,5,7,11,19,1,25] 95 -> [13,1,19,3,15,26,11,21,7] 96 -> [1,13,19,5,9,15,7,25,23] 97 -> [21,19,11,1,13,26,5,7,17] 98 -> [19,17,7,9,23,25,5,1,13] 99 -> [1,15,25,3,9,23,7,19,13] [ OK ] CRUSH.indep_out_alt (4 ms) [ RUN ] CRUSH.indep_out_contig # id weight type name reweight -1 27 root default -3 9 rack rack-0 -2 3 host host-0-0 0 1 osd.0 0 1 1 osd.1 0 2 1 osd.2 0 -4 3 host host-0-1 3 1 osd.3 0 4 1 osd.4 0 5 1 osd.5 0 -5 3 host host-0-2 6 1 osd.6 0 7 1 osd.7 0 8 1 osd.8 0 -7 9 rack rack-1 -6 3 host host-1-0 9 1 osd.9 1 10 1 osd.10 1 11 1 osd.11 1 -8 3 host host-1-1 12 1 osd.12 1 13 1 osd.13 1 14 1 osd.14 1 -9 3 host host-1-2 15 1 osd.15 1 16 1 
osd.16 1 17 1 osd.17 1 -11 9 rack rack-2 -10 3 host host-2-0 18 1 osd.18 1 19 1 osd.19 1 20 1 osd.20 1 -12 3 host host-2-1 21 1 osd.21 1 22 1 osd.22 1 23 1 osd.23 1 -13 3 host host-2-2 24 1 osd.24 1 25 1 osd.25 1 26 1 osd.26 1 0 -> [11,17,23,2147483647,25,20,14] 1 -> [9,19,25,2147483647,13,22,15] 2 -> [17,9,2147483647,21,14,25,18] 3 -> [2147483647,14,18,10,15,26,23] 4 -> [14,22,18,24,9,16,2147483647] 5 -> [2147483647,25,10,22,13,20,15] 6 -> [10,2147483647,12,25,15,22,20] 7 -> [19,10,25,21,12,2147483647,17] 8 -> [10,17,2147483647,22,14,20,25] 9 -> [22,18,13,16,25,9,2147483647] 10 -> [25,12,9,2147483647,20,21,17] 11 -> [13,24,18,22,2147483647,11,16] 12 -> [24,18,14,11,16,23,2147483647] 13 -> [15,9,14,25,20,2147483647,21] 14 -> [13,2147483647,24,20,10,21,15] 15 -> [14,2147483647,25,19,15,22,9] 16 -> [22,10,18,26,17,2147483647,12] 17 -> [24,14,11,15,2147483647,23,19] 18 -> [10,17,25,13,18,22,2147483647] 19 -> [22,20,24,2147483647,15,13,10] 20 -> [14,10,24,20,15,2147483647,21] 21 -> [14,17,19,9,21,2147483647,24] 22 -> [20,16,11,12,21,25,2147483647] 23 -> [9,25,15,2147483647,21,18,13] 24 -> [12,20,23,15,2147483647,11,26] 25 -> [12,16,20,21,11,2147483647,24] 26 -> [18,2147483647,17,10,21,13,24] 27 -> [23,25,15,9,20,2147483647,12] 28 -> [15,13,23,9,26,2147483647,19] 29 -> [2147483647,16,9,24,22,12,20] 30 -> [11,12,24,23,17,2147483647,20] 31 -> [2147483647,24,22,11,18,12,15] 32 -> [19,2147483647,15,11,13,26,22] 33 -> [14,18,21,24,2147483647,15,10] 34 -> [13,25,21,18,16,11,2147483647] 35 -> [22,19,13,10,2147483647,26,15] 36 -> [17,23,9,13,26,20,2147483647] 37 -> [25,17,22,11,20,12,2147483647] 38 -> [25,23,11,14,2147483647,20,17] 39 -> [16,13,22,20,24,10,2147483647] 40 -> [23,18,13,26,2147483647,17,9] 41 -> [19,14,2147483647,25,10,23,15] 42 -> [18,2147483647,14,11,21,17,26] 43 -> [10,17,21,13,19,25,2147483647] 44 -> [11,20,24,17,22,14,2147483647] 45 -> [2147483647,24,16,12,9,21,19] 46 -> [16,21,26,14,11,19,2147483647] 47 -> [25,22,2147483647,10,18,13,15] 48 -> [17,12,19,26,10,2147483647,21] 49 -> [20,12,23,17,9,26,2147483647] 50 -> [14,9,21,18,16,24,2147483647] 51 -> [2147483647,20,13,10,23,16,26] 52 -> [14,22,16,24,2147483647,20,11] 53 -> [23,26,19,9,16,12,2147483647] 54 -> [18,24,16,14,10,21,2147483647] 55 -> [16,13,22,9,25,2147483647,19] 56 -> [18,16,2147483647,11,21,13,24] 57 -> [16,20,2147483647,26,21,9,13] 58 -> [16,11,23,18,14,2147483647,25] 59 -> [17,14,24,20,23,2147483647,11] 60 -> [22,19,2147483647,24,16,9,13] 61 -> [24,14,2147483647,17,19,11,22] 62 -> [14,17,9,24,19,21,2147483647] 63 -> [18,16,9,24,2147483647,22,14] 64 -> [19,26,11,16,13,21,2147483647] 65 -> [19,14,26,17,2147483647,21,10] 66 -> [12,18,21,17,10,2147483647,24] 67 -> [17,18,23,9,2147483647,24,13] 68 -> [25,17,12,21,10,18,2147483647] 69 -> [20,23,26,12,2147483647,11,15] 70 -> [9,23,2147483647,26,16,18,12] 71 -> [25,19,15,22,9,2147483647,14] 72 -> [2147483647,9,22,13,25,19,16] 73 -> [21,12,24,18,16,2147483647,11] 74 -> [11,18,15,2147483647,26,13,21] 75 -> [14,19,2147483647,24,22,9,16] 76 -> [12,16,10,22,26,18,2147483647] 77 -> [20,25,2147483647,9,15,14,23] 78 -> [17,24,9,2147483647,20,22,13] 79 -> [16,23,14,19,11,2147483647,25] 80 -> [13,24,15,2147483647,23,10,18] 81 -> [12,20,23,11,2147483647,17,26] 82 -> [23,10,18,12,26,17,2147483647] 83 -> [15,2147483647,14,26,23,9,20] 84 -> [10,24,14,20,17,21,2147483647] 85 -> [25,13,17,19,9,23,2147483647] 86 -> [9,20,24,16,13,22,2147483647] 87 -> [15,20,21,2147483647,12,24,10] 88 -> [10,2147483647,20,23,25,13,17] 89 -> [22,20,14,16,10,25,2147483647] 90 -> [20,2147483647,24,21,15,13,10] 91 
-> [11,22,24,2147483647,17,18,14] 92 -> [17,23,24,2147483647,13,9,18] 93 -> [23,12,2147483647,24,15,19,9] 94 -> [15,23,13,26,10,2147483647,19] 95 -> [13,21,11,2147483647,16,26,18] 96 -> [25,14,18,23,9,17,2147483647] 97 -> [21,19,11,24,12,16,2147483647] 98 -> [18,16,14,10,23,24,2147483647] 99 -> [19,15,25,21,9,2147483647,12] [ OK ] CRUSH.indep_out_contig (16 ms) [ RUN ] CRUSH.indep_out_progressive # id weight type name reweight -1 27 root default -3 9 rack rack-0 -2 3 host host-0-0 0 1 osd.0 1 1 1 osd.1 1 2 1 osd.2 1 -4 3 host host-0-1 3 1 osd.3 1 4 1 osd.4 1 5 1 osd.5 1 -5 3 host host-0-2 6 1 osd.6 1 7 1 osd.7 1 8 1 osd.8 1 -7 9 rack rack-1 -6 3 host host-1-0 9 1 osd.9 1 10 1 osd.10 1 11 1 osd.11 1 -8 3 host host-1-1 12 1 osd.12 1 13 1 osd.13 1 14 1 osd.14 1 -9 3 host host-1-2 15 1 osd.15 1 16 1 osd.16 1 17 1 osd.17 1 -11 9 rack rack-2 -10 3 host host-2-0 18 1 osd.18 1 19 1 osd.19 1 20 1 osd.20 1 -12 3 host host-2-1 21 1 osd.21 1 22 1 osd.22 1 23 1 osd.23 1 -13 3 host host-2-2 24 1 osd.24 1 25 1 osd.25 1 26 1 osd.26 1 (0/27 out) 1 -> [9,7,1,26,13,22,5] (1/27 out) 1 -> [9,7,1,26,13,22,5] (2/27 out) 1 -> [9,7,2,26,13,22,5] 0 moved, 1 changed (3/27 out) 1 -> [9,7,25,15,13,22,5] 0 moved, 2 changed (4/27 out) 1 -> [9,7,25,15,13,22,5] (5/27 out) 1 -> [9,7,25,15,13,22,5] (6/27 out) 1 -> [9,7,25,19,13,22,15] 15 moved from 3 to 6 1 moved, 2 changed (7/27 out) 1 -> [9,7,25,19,13,22,15] (8/27 out) 1 -> [9,8,25,19,13,22,15] 0 moved, 1 changed (9/27 out) 1 -> [9,19,25,2147483647,13,22,15] 19 moved from 3 to 1 1 moved, 2 changed (10/27 out) 1 -> [10,19,25,2147483647,13,22,15] 0 moved, 1 changed (11/27 out) 1 -> [11,19,25,2147483647,13,22,15] 0 moved, 1 changed (12/27 out) 1 -> [25,19,2147483647,2147483647,13,22,15] 25 moved from 2 to 0 1 moved, 2 changed (13/27 out) 1 -> [25,19,2147483647,2147483647,13,22,15] (14/27 out) 1 -> [25,19,2147483647,2147483647,14,22,15] 0 moved, 1 changed (15/27 out) 1 -> [25,19,2147483647,2147483647,21,2147483647,15] 0 moved, 2 changed (16/27 out) 1 -> [25,19,2147483647,2147483647,21,2147483647,16] 0 moved, 1 changed (17/27 out) 1 -> [25,19,2147483647,2147483647,21,2147483647,17] 0 moved, 1 changed (18/27 out) 1 -> [25,19,2147483647,2147483647,21,2147483647,2147483647] 0 moved, 1 changed (19/27 out) 1 -> [25,19,2147483647,2147483647,21,2147483647,2147483647] (20/27 out) 1 -> [25,20,2147483647,2147483647,21,2147483647,2147483647] 0 moved, 1 changed (21/27 out) 1 -> [25,2147483647,2147483647,2147483647,21,2147483647,2147483647] 0 moved, 1 changed (22/27 out) 1 -> [25,2147483647,2147483647,2147483647,22,2147483647,2147483647] 0 moved, 1 changed (23/27 out) 1 -> [25,2147483647,2147483647,2147483647,23,2147483647,2147483647] 0 moved, 1 changed (24/27 out) 1 -> [25,2147483647,2147483647,2147483647,2147483647,2147483647,2147483647] 0 moved, 1 changed (25/27 out) 1 -> [25,2147483647,2147483647,2147483647,2147483647,2147483647,2147483647] (26/27 out) 1 -> [26,2147483647,2147483647,2147483647,2147483647,2147483647,2147483647] 0 moved, 1 changed (0/27 out) 2 -> [17,9,4,25,14,0,18] (1/27 out) 2 -> [17,9,4,25,14,1,18] 0 moved, 1 changed (2/27 out) 2 -> [17,9,4,25,14,2,18] 0 moved, 1 changed (3/27 out) 2 -> [17,9,4,21,14,25,18] 25 moved from 3 to 5 1 moved, 2 changed (4/27 out) 2 -> [17,9,4,21,14,25,18] (5/27 out) 2 -> [17,9,5,21,14,25,18] 0 moved, 1 changed (6/27 out) 2 -> [17,9,8,21,14,25,18] 0 moved, 1 changed (7/27 out) 2 -> [17,9,8,21,14,25,18] (8/27 out) 2 -> [17,9,8,21,14,25,18] (9/27 out) 2 -> [17,9,2147483647,21,14,25,18] 0 moved, 1 changed (10/27 out) 2 -> 
[17,11,2147483647,21,14,25,18] 0 moved, 1 changed (11/27 out) 2 -> [17,11,2147483647,21,14,25,18] (12/27 out) 2 -> [17,2147483647,2147483647,21,14,25,18] 0 moved, 1 changed (13/27 out) 2 -> [17,2147483647,2147483647,21,14,25,18] (14/27 out) 2 -> [17,2147483647,2147483647,21,14,25,18] (15/27 out) 2 -> [17,2147483647,2147483647,21,2147483647,25,18] 0 moved, 1 changed (16/27 out) 2 -> [17,2147483647,2147483647,21,2147483647,25,18] (17/27 out) 2 -> [17,2147483647,2147483647,21,2147483647,25,18] (18/27 out) 2 -> [22,2147483647,2147483647,2147483647,2147483647,25,18] 0 moved, 2 changed (19/27 out) 2 -> [22,2147483647,2147483647,2147483647,2147483647,25,19] 0 moved, 1 changed (20/27 out) 2 -> [22,2147483647,2147483647,2147483647,2147483647,25,20] 0 moved, 1 changed (21/27 out) 2 -> [22,2147483647,2147483647,2147483647,2147483647,25,2147483647] 0 moved, 1 changed (22/27 out) 2 -> [22,2147483647,2147483647,2147483647,2147483647,25,2147483647] (23/27 out) 2 -> [23,2147483647,2147483647,2147483647,2147483647,25,2147483647] 0 moved, 1 changed (24/27 out) 2 -> [2147483647,2147483647,2147483647,2147483647,2147483647,25,2147483647] 0 moved, 1 changed (25/27 out) 2 -> [2147483647,2147483647,2147483647,2147483647,2147483647,25,2147483647] (26/27 out) 2 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,2147483647] 0 moved, 1 changed (0/27 out) 3 -> [4,6,18,10,15,0,12] (1/27 out) 3 -> [4,6,18,10,15,2,12] 0 moved, 1 changed (2/27 out) 3 -> [4,6,18,10,15,2,12] (3/27 out) 3 -> [4,6,18,10,15,26,12] 0 moved, 1 changed (4/27 out) 3 -> [4,6,18,10,15,26,12] (5/27 out) 3 -> [5,6,18,10,15,26,12] 0 moved, 1 changed (6/27 out) 3 -> [23,6,18,10,15,26,12] 0 moved, 1 changed (7/27 out) 3 -> [23,7,18,10,15,26,12] 0 moved, 1 changed (8/27 out) 3 -> [23,8,18,10,15,26,12] 0 moved, 1 changed (9/27 out) 3 -> [2147483647,14,18,10,15,26,23] 23 moved from 0 to 6 1 moved, 3 changed (10/27 out) 3 -> [2147483647,14,18,10,15,26,23] (11/27 out) 3 -> [2147483647,14,18,11,15,26,23] 0 moved, 1 changed (12/27 out) 3 -> [2147483647,14,18,2147483647,15,26,23] 0 moved, 1 changed (13/27 out) 3 -> [2147483647,14,18,2147483647,15,26,23] (14/27 out) 3 -> [2147483647,14,18,2147483647,15,26,23] (15/27 out) 3 -> [2147483647,2147483647,18,2147483647,15,26,23] 0 moved, 1 changed (16/27 out) 3 -> [2147483647,2147483647,18,2147483647,16,26,23] 0 moved, 1 changed (17/27 out) 3 -> [2147483647,2147483647,18,2147483647,17,26,23] 0 moved, 1 changed (18/27 out) 3 -> [2147483647,2147483647,18,2147483647,2147483647,26,23] 0 moved, 1 changed (19/27 out) 3 -> [2147483647,2147483647,20,2147483647,2147483647,26,23] 0 moved, 1 changed (20/27 out) 3 -> [2147483647,2147483647,20,2147483647,2147483647,26,23] (21/27 out) 3 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,23] 0 moved, 1 changed (22/27 out) 3 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,23] (23/27 out) 3 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,23] (24/27 out) 3 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,2147483647] 0 moved, 1 changed (25/27 out) 3 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,2147483647] (26/27 out) 3 -> [2147483647,2147483647,2147483647,2147483647,2147483647,26,2147483647] (0/27 out) 4 -> [14,22,18,0,9,16,7] (1/27 out) 4 -> [14,22,18,1,9,16,7] 0 moved, 1 changed (2/27 out) 4 -> [14,22,18,2,9,16,7] 0 moved, 1 changed (3/27 out) 4 -> [14,22,18,24,9,16,7] 0 moved, 1 changed (4/27 out) 4 -> [14,22,18,24,9,16,7] (5/27 out) 4 -> [14,22,18,24,9,16,7] (6/27 out) 4 -> [14,22,18,24,9,16,7] (7/27 out) 
4 -> [14,22,18,24,9,16,7] (8/27 out) 4 -> [14,22,18,24,9,16,8] 0 moved, 1 changed (9/27 out) 4 -> [14,22,18,24,9,16,2147483647] 0 moved, 1 changed (10/27 out) 4 -> [14,22,18,24,10,16,2147483647] 0 moved, 1 changed (11/27 out) 4 -> [14,22,18,24,11,16,2147483647] 0 moved, 1 changed (12/27 out) 4 -> [14,22,18,24,2147483647,16,2147483647] 0 moved, 1 changed (13/27 out) 4 -> [14,22,18,24,2147483647,16,2147483647] (14/27 out) 4 -> [14,22,18,24,2147483647,16,2147483647] (15/27 out) 4 -> [2147483647,22,18,24,2147483647,16,2147483647] 0 moved, 1 changed (16/27 out) 4 -> [2147483647,22,18,24,2147483647,16,2147483647] (17/27 out) 4 -> [2147483647,22,18,24,2147483647,17,2147483647] 0 moved, 1 changed (18/27 out) 4 -> [2147483647,22,18,24,2147483647,2147483647,2147483647] 0 moved, 1 changed (19/27 out) 4 -> [2147483647,22,20,24,2147483647,2147483647,2147483647] 0 moved, 1 changed (20/27 out) 4 -> [2147483647,22,20,24,2147483647,2147483647,2147483647] (21/27 out) 4 -> [2147483647,22,2147483647,24,2147483647,2147483647,2147483647] 0 moved, 1 changed (22/27 out) 4 -> [2147483647,22,2147483647,24,2147483647,2147483647,2147483647] (23/27 out) 4 -> [2147483647,23,2147483647,24,2147483647,2147483647,2147483647] 0 moved, 1 changed (24/27 out) 4 -> [2147483647,2147483647,2147483647,24,2147483647,2147483647,2147483647] 0 moved, 1 changed (25/27 out) 4 -> [2147483647,2147483647,2147483647,25,2147483647,2147483647,2147483647] 0 moved, 1 changed (26/27 out) 4 -> [2147483647,2147483647,2147483647,26,2147483647,2147483647,2147483647] 0 moved, 1 changed 77 total changed [ OK ] CRUSH.indep_out_progressive (61 ms) [----------] 5 tests from CRUSH (91 ms total) [----------] Global test environment tear-down [==========] 5 tests from 1 test case ran. (91 ms total) [ PASSED ] 5 tests. PASS: unittest_crush_indep [==========] Running 10 tests from 1 test case. [----------] Global test environment set-up. [----------] 10 tests from OSDMapTest [ RUN ] OSDMapTest.Create [ OK ] OSDMapTest.Create (1 ms) [ RUN ] OSDMapTest.Features [ OK ] OSDMapTest.Features (0 ms) [ RUN ] OSDMapTest.MapPG [ OK ] OSDMapTest.MapPG (0 ms) [ RUN ] OSDMapTest.MapFunctionsMatch [ OK ] OSDMapTest.MapFunctionsMatch (1 ms) [ RUN ] OSDMapTest.PrimaryIsFirst [ OK ] OSDMapTest.PrimaryIsFirst (0 ms) [ RUN ] OSDMapTest.PGTempRespected [ OK ] OSDMapTest.PGTempRespected (1 ms) [ RUN ] OSDMapTest.PrimaryTempRespected [ OK ] OSDMapTest.PrimaryTempRespected (0 ms) [ RUN ] OSDMapTest.RemovesRedundantTemps [ OK ] OSDMapTest.RemovesRedundantTemps (0 ms) [ RUN ] OSDMapTest.KeepsNecessaryTemps [ OK ] OSDMapTest.KeepsNecessaryTemps (1 ms) [ RUN ] OSDMapTest.PrimaryAffinity pool 0 pool 1 [ OK ] OSDMapTest.PrimaryAffinity (372 ms) [----------] 10 tests from OSDMapTest (376 ms total) [----------] Global test environment tear-down [==========] 10 tests from 1 test case ran. (376 ms total) [ PASSED ] 10 tests. PASS: unittest_osdmap [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from WorkQueue [ RUN ] WorkQueue.StartStop 2014-10-08 11:12:43.081520 2abd47a1ec40 -1 did not load config file, using default settings. [ OK ] WorkQueue.StartStop (2 ms) [ RUN ] WorkQueue.Resize osd_op_threads = '5' osd_op_threads = '3' osd_op_threads = '15' osd_op_threads = '0' osd_op_threads = '-1' [ OK ] WorkQueue.Resize (7003 ms) [----------] 2 tests from WorkQueue (7005 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (7005 ms total) [ PASSED ] 2 tests. 
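The value 2147483647 that recurs in the CRUSH indep mappings above is 0x7fffffff, which CRUSH defines as CRUSH_ITEM_NONE (crush/crush.h): a placeholder for an erasure-code slot that could not be filled once enough devices are marked out. The indep_out_progressive counters ("N moved, M changed") track how many filled slots shifted position versus changed contents as devices go out, since indep placement is designed to disturb surviving slots as little as possible. A minimal sketch of spotting such holes in a returned mapping; the constant is copied locally here rather than pulled from the Ceph headers:

    #include <cstdio>
    #include <vector>

    // CRUSH marks slots it cannot fill with 0x7fffffff, i.e. 2147483647
    // (CRUSH_ITEM_NONE in crush/crush.h); copied locally for this sketch.
    constexpr int kCrushItemNone = 0x7fffffff;

    // Count the erasure-code slots of a placement that got no OSD.
    static int count_holes(const std::vector<int>& mapping) {
      int holes = 0;
      for (int osd : mapping)
        if (osd == kCrushItemNone)
          ++holes;
      return holes;
    }

    int main() {
      // First indep_out_contig mapping from the log: slot 3 is a hole.
      std::vector<int> m = {11, 17, 23, kCrushItemNone, 25, 20, 14};
      std::printf("%d hole(s)\n", count_holes(m));  // prints "1 hole(s)"
    }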
PASS: unittest_workqueue 2014-10-08 11:12:50.100599 2b3db0acdc40 -1 did not load config file, using default settings. [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from Striper [ RUN ] Striper.Stripe1 result [extent(1.00000012 (18) in @0 98304~14374 -> [7469,4096,19757,4096,32045,4096,44333,2086]),extent(1.00000013 (19) in @0 94931~15661 -> [0,3373,11565,4096,23853,4096,36141,4096]),extent(1.00000014 (20) in @0 94208~16384 -> [3373,4096,15661,4096,27949,4096,40237,4096])] [ OK ] Striper.Stripe1 (0 ms) [ RUN ] Striper.EmptyPartialResult ex [extent(1.000000ac (172) in @0 4128768~65536 -> [0,65536]),extent(1.000000ad (173) in @0 0~65536 -> [65536,65536])] [ OK ] Striper.EmptyPartialResult (0 ms) [----------] 2 tests from Striper (0 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (0 ms total) [ PASSED ] 2 tests. PASS: unittest_striper Running main() from gtest_main.cc [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. [----------] 6 tests from PrebufferedStreambuf [ RUN ] PrebufferedStreambuf.Empty [ OK ] PrebufferedStreambuf.Empty (0 ms) [ RUN ] PrebufferedStreambuf.Simple [ OK ] PrebufferedStreambuf.Simple (0 ms) [ RUN ] PrebufferedStreambuf.Multiline [ OK ] PrebufferedStreambuf.Multiline (0 ms) [ RUN ] PrebufferedStreambuf.Withnull [ OK ] PrebufferedStreambuf.Withnull (0 ms) [ RUN ] PrebufferedStreambuf.SimpleOverflow [ OK ] PrebufferedStreambuf.SimpleOverflow (0 ms) [ RUN ] PrebufferedStreambuf.ManyOverflow [ OK ] PrebufferedStreambuf.ManyOverflow (0 ms) [----------] 6 tests from PrebufferedStreambuf (0 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (0 ms total) [ PASSED ] 6 tests. PASS: unittest_prebufferedstreambuf Running main() from gtest_main.cc [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from StrList [ RUN ] StrList.get_str_list 'foo,bar' -> foo,bar 'foo' -> foo 'foo;bar' -> foo,bar 'foo bar' -> foo,bar ' foo bar' -> foo,bar ' foo bar ' -> foo,bar 'a,b,c' -> a,b,c ' a b c ' -> a,b,c 'a, b, c' -> a,b,c 'a b c' -> a,b,c 'a=b=c' -> a,b,c [ OK ] StrList.get_str_list (0 ms) [ RUN ] StrList.get_str_vec 'foo,bar' -> [foo,bar] 'foo' -> [foo] 'foo;bar' -> [foo,bar] 'foo bar' -> [foo,bar] ' foo bar' -> [foo,bar] ' foo bar ' -> [foo,bar] 'a,b,c' -> [a,b,c] ' a b c ' -> [a,b,c] 'a, b, c' -> [a,b,c] 'a b c' -> [a,b,c] 'a=b=c' -> [a,b,c] [ OK ] StrList.get_str_vec (0 ms) [----------] 2 tests from StrList (0 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (0 ms total) [ PASSED ] 2 tests. PASS: unittest_str_list Running main() from gtest_main.cc [==========] Running 9 tests from 1 test case. [----------] Global test environment set-up. 
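The StrList cases above show the same tokens coming back for 'foo,bar', 'foo;bar', 'foo bar' and 'a=b=c': get_str_list and get_str_vec (include/str_list.h) appear to treat commas, semicolons, spaces, tabs and '=' alike as separators. A standalone sketch of that splitting behaviour, using only the delimiter set implied by the log rather than Ceph's actual header:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Split on any run of the separators seen in the StrList output above.
    static std::vector<std::string> split(const std::string& s,
                                          const char* delims = ";,= \t") {
      std::vector<std::string> out;
      size_t pos = 0;
      while ((pos = s.find_first_not_of(delims, pos)) != std::string::npos) {
        size_t end = s.find_first_of(delims, pos);
        out.push_back(s.substr(pos, end - pos));
        pos = end;
      }
      return out;
    }

    int main() {
      for (const auto& t : split(" a=b, c "))
        std::printf("'%s'\n", t.c_str());  // 'a' 'b' 'c', as in the log
    }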
[----------] 9 tests from Log [ RUN ] Log.Simple [ OK ] Log.Simple (3 ms) [ RUN ] Log.ManyNoGather [ OK ] Log.ManyNoGather (0 ms) [ RUN ] Log.ManyGatherLog [ OK ] Log.ManyGatherLog (76 ms) [ RUN ] Log.ManyGatherLogStringAssign [ OK ] Log.ManyGatherLogStringAssign (75 ms) [ RUN ] Log.ManyGatherLogStringAssignWithReserve [ OK ] Log.ManyGatherLogStringAssignWithReserve (49 ms) [ RUN ] Log.ManyGatherLogPrebuf [ OK ] Log.ManyGatherLogPrebuf (51 ms) [ RUN ] Log.ManyGatherLogPrebufOverflow [ OK ] Log.ManyGatherLogPrebufOverflow (45 ms) [ RUN ] Log.ManyGather [ OK ] Log.ManyGather (46 ms) [ RUN ] Log.InternalSegv [WARNING] ./src/gtest-death-test.cc:741:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads. [ OK ] Log.InternalSegv (1 ms) [----------] 9 tests from Log (347 ms total) [----------] Global test environment tear-down [==========] 9 tests from 1 test case ran. (347 ms total) [ PASSED ] 9 tests. PASS: unittest_log 2014-10-08 11:12:50.476067 2ba57dab2c40 -1 did not load config file, using default settings. [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. [----------] 6 tests from ThrottleTest [ RUN ] ThrottleTest.Throttle common/Throttle.cc: In function 'Throttle::Throttle(CephContext*, std::string, int64_t, bool)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.476818 common/Throttle.cc: 38: FAILED assert(m >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::Throttle(CephContext*, std::string, long, bool)+0xea) [0x85cfbe] 3: (ThrottleTest_Throttle_Test::TestBody()+0x25f) [0x8302af] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2014-10-08 11:12:50.479535 2ba57dab2c40 -1 common/Throttle.cc: In function 'Throttle::Throttle(CephContext*, std::string, int64_t, bool)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.476818 common/Throttle.cc: 38: FAILED assert(m >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::Throttle(CephContext*, std::string, long, bool)+0xea) [0x85cfbe] 3: (ThrottleTest_Throttle_Test::TestBody()+0x25f) [0x8302af] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- begin dump of recent events --- -14> 2014-10-08 11:12:50.475768 2ba57dab2c40 5 asok(0x3984460) register_command perfcounters_dump hook 0x39867c0 -13> 2014-10-08 11:12:50.475820 2ba57dab2c40 5 asok(0x3984460) register_command 1 hook 0x39867c0 -12> 2014-10-08 11:12:50.475834 2ba57dab2c40 5 asok(0x3984460) register_command perf dump hook 0x39867c0 -11> 2014-10-08 11:12:50.475848 2ba57dab2c40 5 asok(0x3984460) register_command perfcounters_schema hook 0x39867c0 -10> 2014-10-08 11:12:50.475857 2ba57dab2c40 5 asok(0x3984460) register_command 2 hook 0x39867c0 -9> 2014-10-08 11:12:50.475864 2ba57dab2c40 5 asok(0x3984460) register_command perf schema hook 0x39867c0 -8> 2014-10-08 11:12:50.475873 2ba57dab2c40 5 asok(0x3984460) register_command config show hook 0x39867c0 -7> 2014-10-08 11:12:50.475879 2ba57dab2c40 5 asok(0x3984460) register_command config set hook 0x39867c0 -6> 2014-10-08 11:12:50.475888 2ba57dab2c40 5 asok(0x3984460) register_command config get hook 0x39867c0 -5> 2014-10-08 11:12:50.475902 2ba57dab2c40 5 asok(0x3984460) register_command config diff hook 0x39867c0 -4> 2014-10-08 11:12:50.475910 2ba57dab2c40 5 asok(0x3984460) register_command log flush hook 0x39867c0 -3> 2014-10-08 11:12:50.475921 2ba57dab2c40 5 asok(0x3984460) register_command log dump hook 0x39867c0 -2> 2014-10-08 11:12:50.475929 2ba57dab2c40 5 asok(0x3984460) register_command log reopen hook 0x39867c0 -1> 2014-10-08 11:12:50.476067 2ba57dab2c40 -1 did not load config file, using default settings. 0> 2014-10-08 11:12:50.479535 2ba57dab2c40 -1 common/Throttle.cc: In function 'Throttle::Throttle(CephContext*, std::string, int64_t, bool)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.476818 common/Throttle.cc: 38: FAILED assert(m >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::Throttle(CephContext*, std::string, long, bool)+0xea) [0x85cfbe] 3: (ThrottleTest_Throttle_Test::TestBody()+0x25f) [0x8302af] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 1/ 5 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 keyvaluestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs -2/-2 (syslog threshold) 99/99 (stderr threshold) max_recent 500 max_new 1000 log_file --- end dump of recent events --- [ OK ] ThrottleTest.Throttle (3 ms) [ RUN ] ThrottleTest.take common/Throttle.cc: In function 'int64_t Throttle::take(int64_t)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.479940 common/Throttle.cc: 145: FAILED assert(c >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::take(long)+0x6a) [0x85dc12] 3: (ThrottleTest_take_Test::TestBody()+0x2a3) [0x82a483] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2014-10-08 11:12:50.482663 2ba57dab2c40 -1 common/Throttle.cc: In function 'int64_t Throttle::take(int64_t)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.479940 common/Throttle.cc: 145: FAILED assert(c >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::take(long)+0x6a) [0x85dc12] 3: (ThrottleTest_take_Test::TestBody()+0x2a3) [0x82a483] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. --- begin dump of recent events --- 0> 2014-10-08 11:12:50.482663 2ba57dab2c40 -1 common/Throttle.cc: In function 'int64_t Throttle::take(int64_t)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.479940 common/Throttle.cc: 145: FAILED assert(c >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::take(long)+0x6a) [0x85dc12] 3: (ThrottleTest_take_Test::TestBody()+0x2a3) [0x82a483] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 1/ 5 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 keyvaluestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs -2/-2 (syslog threshold) 99/99 (stderr threshold) max_recent 500 max_new 1000 log_file --- end dump of recent events --- [ OK ] ThrottleTest.take (3 ms) [ RUN ] ThrottleTest.get common/Throttle.cc: In function 'bool Throttle::get(int64_t, int64_t)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.482908 common/Throttle.cc: 165: FAILED assert(c >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::get(long, long)+0x75) [0x85deb5] 3: (ThrottleTest_get_Test::TestBody()+0x295) [0x82dd35] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2014-10-08 11:12:50.485339 2ba57dab2c40 -1 common/Throttle.cc: In function 'bool Throttle::get(int64_t, int64_t)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.482908 common/Throttle.cc: 165: FAILED assert(c >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::get(long, long)+0x75) [0x85deb5] 3: (ThrottleTest_get_Test::TestBody()+0x295) [0x82dd35] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- begin dump of recent events --- 0> 2014-10-08 11:12:50.485339 2ba57dab2c40 -1 common/Throttle.cc: In function 'bool Throttle::get(int64_t, int64_t)' thread 2ba57dab2c40 time 2014-10-08 11:12:50.482908 common/Throttle.cc: 165: FAILED assert(c >= 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x85fa2d] 2: (Throttle::get(long, long)+0x75) [0x85deb5] 3: (ThrottleTest_get_Test::TestBody()+0x295) [0x82dd35] 4: (testing::Test::Run()+0x95) [0x8368af] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x836e15] 6: (testing::TestCase::Run()+0xca) [0x837320] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83b89c] 8: (testing::UnitTest::Run()+0x1c) [0x83a7fe] 9: (main()+0x6c) [0x8291cc] 10: (__libc_start_main()+0xed) [0x2ba57d71376d] 11: ./unittest_throttle() [0x829401] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. --- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 1/ 5 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 keyvaluestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs -2/-2 (syslog threshold) 99/99 (stderr threshold) max_recent 500 max_new 1000 log_file --- end dump of recent events --- Trying (1) with delay 1us Trying (2) with delay 1us [ OK ] ThrottleTest.get (4 ms) [ RUN ] ThrottleTest.get_or_fail [ OK ] ThrottleTest.get_or_fail (0 ms) [ RUN ] ThrottleTest.wait Trying (3) with delay 1us [ OK ] ThrottleTest.wait (0 ms) [ RUN ] ThrottleTest.destructor [ OK ] ThrottleTest.destructor (0 ms) [----------] 6 tests from ThrottleTest (10 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (10 ms total) [ PASSED ] 6 tests. PASS: unittest_throttle 2014-10-08 11:12:50.501461 2b008a75ac40 -1 did not load config file, using default settings. [==========] Running 11 tests from 1 test case. [----------] Global test environment set-up. [----------] 11 tests from CrushWrapper [ RUN ] CrushWrapper.get_immediate_parent [ OK ] CrushWrapper.get_immediate_parent (1 ms) [ RUN ] CrushWrapper.move_bucket [ OK ] CrushWrapper.move_bucket (0 ms) [ RUN ] CrushWrapper.check_item_loc [ OK ] CrushWrapper.check_item_loc (0 ms) [ RUN ] CrushWrapper.update_item [ OK ] CrushWrapper.update_item (0 ms) [ RUN ] CrushWrapper.insert_item [ OK ] CrushWrapper.insert_item (0 ms) [ RUN ] CrushWrapper.item_bucket_names [ OK ] CrushWrapper.item_bucket_names (0 ms) [ RUN ] CrushWrapper.bucket_types [ OK ] CrushWrapper.bucket_types (0 ms) [ RUN ] CrushWrapper.is_valid_crush_name [ OK ] CrushWrapper.is_valid_crush_name (0 ms) [ RUN ] CrushWrapper.is_valid_crush_loc [ OK ] CrushWrapper.is_valid_crush_loc (0 ms) [ RUN ] CrushWrapper.dump_rules [ OK ] CrushWrapper.dump_rules (0 ms) [ RUN ] CrushWrapper.distance [ OK ] CrushWrapper.distance (0 ms) [----------] 11 tests from CrushWrapper (1 ms total) [----------] Global test environment tear-down [==========] 11 tests from 1 test case ran. (1 ms total) [ PASSED ] 11 tests. PASS: unittest_crush_wrapper Running main() from gtest_main.cc [==========] Running 5 tests from 3 test cases. 
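The FAILED assert dumps in the ThrottleTest output above are expected output, not failures: each case still ends in [ OK ] and the suite passes. The tests deliberately trigger Throttle's guard asserts, a negative maximum in the constructor (assert(m >= 0)) and negative counts in take()/get() (assert(c >= 0)), and verify that the assert fires; judging by the in-process event dumps, this build likely catches the exception Ceph's assert machinery raises rather than forking. One generic way to express such a check in gtest is a death test; the sketch below uses a hypothetical FakeThrottle stand-in and only illustrates the pattern, it is not the actual test_throttle.cc code:

    #include <cassert>
    #include <gtest/gtest.h>

    // Hypothetical stand-in carrying the same guard as common/Throttle.cc:38.
    struct FakeThrottle {
      explicit FakeThrottle(long m) { assert(m >= 0); }
    };

    TEST(ThrottleSketch, RejectsNegativeMax) {
      // The statement runs in a forked child; the parent only checks that the
      // child died with stderr matching the regex, so the test itself passes.
      EXPECT_DEATH({ FakeThrottle t(-1); (void)t; }, "m >= 0");
    }

    int main(int argc, char** argv) {
      testing::InitGoogleTest(&argc, argv);
      return RUN_ALL_TESTS();
    }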
[----------] Global test environment set-up. [----------] 2 tests from RoundTrip [ RUN ] RoundTrip.SimpleRoundTrip [ OK ] RoundTrip.SimpleRoundTrip (0 ms) [ RUN ] RoundTrip.RandomRoundTrips [ OK ] RoundTrip.RandomRoundTrips (15 ms) [----------] 2 tests from RoundTrip (15 ms total) [----------] 1 test from EdgeCase [ RUN ] EdgeCase.EndsInNewline [ OK ] EdgeCase.EndsInNewline (0 ms) [----------] 1 test from EdgeCase (0 ms total) [----------] 2 tests from FuzzEncoding [ RUN ] FuzzEncoding.BadDecode1 [ OK ] FuzzEncoding.BadDecode1 (0 ms) [ RUN ] FuzzEncoding.BadDecode2 [ OK ] FuzzEncoding.BadDecode2 (0 ms) [----------] 2 tests from FuzzEncoding (0 ms total) [----------] Global test environment tear-down [==========] 5 tests from 3 test cases ran. (15 ms total) [ PASSED ] 5 tests. PASS: unittest_base64 Running main() from gtest_main.cc [==========] Running 5 tests from 1 test case. [----------] Global test environment set-up. [----------] 5 tests from CephArgParse [ RUN ] CephArgParse.SimpleArgParse [ OK ] CephArgParse.SimpleArgParse (0 ms) [ RUN ] CephArgParse.DoubleDash [ OK ] CephArgParse.DoubleDash (0 ms) [ RUN ] CephArgParse.WithDashesAndUnderscores [ OK ] CephArgParse.WithDashesAndUnderscores (0 ms) [ RUN ] CephArgParse.WithInt [ OK ] CephArgParse.WithInt (0 ms) [ RUN ] CephArgParse.env_to_vec [ OK ] CephArgParse.env_to_vec (0 ms) [----------] 5 tests from CephArgParse (0 ms total) [----------] Global test environment tear-down [==========] 5 tests from 1 test case ran. (0 ms total) [ PASSED ] 5 tests. PASS: unittest_ceph_argparse Running main() from gtest_main.cc [==========] Running 3 tests from 1 test case. [----------] Global test environment set-up. [----------] 3 tests from CephCompatSet [ RUN ] CephCompatSet.AllSet ./include/CompatSet.h: In function 'void CompatSet::FeatureSet::insert(CompatSet::Feature)' thread 2b63c8f506c0 time 2014-10-08 11:12:51.127613 ./include/CompatSet.h: 38: FAILED assert(f.id > 0) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x466745] 2: (CompatSet::FeatureSet::insert(CompatSet::Feature)+0x36) [0x44028c] 3: (CephCompatSet_AllSet_Test::TestBody()+0x90) [0x43c1a4] 4: (testing::Test::Run()+0x95) [0x4486e7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x448c4d] 6: (testing::TestCase::Run()+0xca) [0x449158] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x44d6d4] 8: (testing::UnitTest::Run()+0x1c) [0x44c636] 9: (main()+0x3e) [0x466612] 10: (__libc_start_main()+0xed) [0x2b63c8bb176d] 11: ./unittest_ceph_compatset() [0x43c059] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
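CephCompatSet.AllSet intentionally trips both bounds checks in CompatSet::FeatureSet::insert, the assert(f.id > 0) above and the assert(f.id < 64) that follows: feature ids are bit positions in a single 64-bit mask, so only ids 1 through 63 are representable. An illustrative sketch of that representation (FeatureMask is a made-up name, not the real CompatSet API):

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    // Illustrative 64-bit bitmask behind a CompatSet-style feature set: one
    // bit per feature id, hence the guards id > 0 and id < 64 seen above.
    struct FeatureMask {
      uint64_t mask = 1;  // bit 0 treated as reserved, so valid ids start at 1
      void insert(int id) {
        assert(id > 0 && id < 64);
        mask |= uint64_t(1) << id;
      }
      bool contains(int id) const { return mask & (uint64_t(1) << id); }
    };

    int main() {
      FeatureMask f;
      f.insert(1);
      std::printf("%d\n", (int)f.contains(1));  // 1
    }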
./include/CompatSet.h: In function 'void CompatSet::FeatureSet::insert(CompatSet::Feature)' thread 2b63c8f506c0 time 2014-10-08 11:12:51.128286 ./include/CompatSet.h: 39: FAILED assert(f.id < 64) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x466745] 2: (CompatSet::FeatureSet::insert(CompatSet::Feature)+0x5c) [0x4402b2] 3: (CephCompatSet_AllSet_Test::TestBody()+0x16c) [0x43c280] 4: (testing::Test::Run()+0x95) [0x4486e7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x448c4d] 6: (testing::TestCase::Run()+0xca) [0x449158] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x44d6d4] 8: (testing::UnitTest::Run()+0x1c) [0x44c636] 9: (main()+0x3e) [0x466612] 10: (__libc_start_main()+0xed) [0x2b63c8bb176d] 11: ./unittest_ceph_compatset() [0x43c059] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] CephCompatSet.AllSet (3 ms) [ RUN ] CephCompatSet.other [ OK ] CephCompatSet.other (0 ms) [ RUN ] CephCompatSet.merge [ OK ] CephCompatSet.merge (0 ms) [----------] 3 tests from CephCompatSet (3 ms total) [----------] Global test environment tear-down [==========] 3 tests from 1 test case ran. (3 ms total) [ PASSED ] 3 tests. PASS: unittest_ceph_compatset Running main() from gtest_main.cc [==========] Running 25 tests from 7 test cases. [----------] Global test environment set-up. [----------] 6 tests from hobject [ RUN ] hobject.prefixes0 [ OK ] hobject.prefixes0 (0 ms) [ RUN ] hobject.prefixes1 [ OK ] hobject.prefixes1 (0 ms) [ RUN ] hobject.prefixes2 [ OK ] hobject.prefixes2 (0 ms) [ RUN ] hobject.prefixes3 [ OK ] hobject.prefixes3 (0 ms) [ RUN ] hobject.prefixes4 [ OK ] hobject.prefixes4 (0 ms) [ RUN ] hobject.prefixes5 [ OK ] hobject.prefixes5 (0 ms) [----------] 6 tests from hobject (0 ms total) [----------] 1 test from pg_interval_t [ RUN ] pg_interval_t.check_new_interval [ OK ] pg_interval_t.check_new_interval (1 ms) [----------] 1 test from pg_interval_t (1 ms total) [----------] 2 tests from pg_t [ RUN ] pg_t.get_ancestor [ OK ] pg_t.get_ancestor (0 ms) [ RUN ] pg_t.split [ OK ] pg_t.split (0 ms) [----------] 2 tests from pg_t (0 ms total) [----------] 12 tests from pg_missing_t [ RUN ] pg_missing_t.constructor [ OK ] pg_missing_t.constructor (0 ms) [ RUN ] pg_missing_t.have_missing [ OK ] pg_missing_t.have_missing (0 ms) [ RUN ] pg_missing_t.swap [ OK ] pg_missing_t.swap (0 ms) [ RUN ] pg_missing_t.is_missing [ OK ] pg_missing_t.is_missing (0 ms) [ RUN ] pg_missing_t.have_old [ OK ] pg_missing_t.have_old (0 ms) [ RUN ] pg_missing_t.add_next_event osd/osd_types.cc: In function 'void pg_missing_t::add_next_event(const pg_log_entry_t&)' thread 2b133b99fc40 time 2014-10-08 11:12:51.146517 osd/osd_types.cc: 3100: FAILED assert(0 == "these don't exist anymore") ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9c57b9] 2: (pg_missing_t::add_next_event(pg_log_entry_t const&)+0x1d1) [0xab2041] 3: (pg_missing_t_add_next_event_Test::TestBody()+0x2c0a) [0x933da4] 4: (testing::Test::Run()+0x95) [0x95fe4b] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x9603b1] 6: (testing::TestCase::Run()+0xca) [0x9608bc] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x964e38] 8: (testing::UnitTest::Run()+0x1c) [0x963d9a] 9: (main()+0x3e) [0x97c9aa] 10: (__libc_start_main()+0xed) [0x2b133b60076d] 11: ./unittest_osd_types() [0x91fc69] NOTE: a 
copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] pg_missing_t.add_next_event (4 ms) [ RUN ] pg_missing_t.revise_need [ OK ] pg_missing_t.revise_need (0 ms) [ RUN ] pg_missing_t.revise_have [ OK ] pg_missing_t.revise_have (0 ms) [ RUN ] pg_missing_t.add [ OK ] pg_missing_t.add (0 ms) [ RUN ] pg_missing_t.rm [ OK ] pg_missing_t.rm (0 ms) [ RUN ] pg_missing_t.got osd/osd_types.cc: In function 'void pg_missing_t::got(const hobject_t&, eversion_t)' thread 2b133b99fc40 time 2014-10-08 11:12:51.150703 osd/osd_types.cc: 3150: FAILED assert(p != missing.end()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9c57b9] 2: (pg_missing_t::got(hobject_t const&, eversion_t)+0x78) [0xab2474] 3: (pg_missing_t_got_Test::TestBody()+0x161) [0x9380b7] 4: (testing::Test::Run()+0x95) [0x95fe4b] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x9603b1] 6: (testing::TestCase::Run()+0xca) [0x9608bc] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x964e38] 8: (testing::UnitTest::Run()+0x1c) [0x963d9a] 9: (main()+0x3e) [0x97c9aa] 10: (__libc_start_main()+0xed) [0x2b133b60076d] 11: ./unittest_osd_types() [0x91fc69] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. osd/osd_types.cc: In function 'void pg_missing_t::got(const hobject_t&, eversion_t)' thread 2b133b99fc40 time 2014-10-08 11:12:51.154534 osd/osd_types.cc: 3151: FAILED assert(p->second.need <= v) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9c57b9] 2: (pg_missing_t::got(hobject_t const&, eversion_t)+0xba) [0xab24b6] 3: (pg_missing_t_got_Test::TestBody()+0x4a6) [0x9383fc] 4: (testing::Test::Run()+0x95) [0x95fe4b] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x9603b1] 6: (testing::TestCase::Run()+0xca) [0x9608bc] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x964e38] 8: (testing::UnitTest::Run()+0x1c) [0x963d9a] 9: (main()+0x3e) [0x97c9aa] 10: (__libc_start_main()+0xed) [0x2b133b60076d] 11: ./unittest_osd_types() [0x91fc69] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
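Likewise, the two pg_missing_t.got asserts above are provoked on purpose: got() marks an object recovered, so it insists the object is actually present in the missing map (p != missing.end()) and that the recovered version is at least the version needed (p->second.need <= v). A toy model of those preconditions, with plain types standing in for the real pg_missing_t:

    #include <cassert>
    #include <map>
    #include <string>

    // Toy model of pg_missing_t::got() (osd/osd_types.cc:3150-3151): the real
    // type keys on hobject_t and tracks eversion_t; plain types stand in here.
    struct MissingSketch {
      std::map<std::string, unsigned> need;  // object -> version still needed
      void got(const std::string& oid, unsigned v) {
        auto p = need.find(oid);
        assert(p != need.end());  // object must actually be missing
        assert(p->second <= v);   // cannot recover to an older version
        need.erase(p);
      }
    };

    int main() {
      MissingSketch m;
      m.need["obj"] = 5;
      m.got("obj", 7);  // fine: recovered at 7, needed 5
      return 0;
    }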
[ OK ] pg_missing_t.got (8 ms) [ RUN ] pg_missing_t.split_into [ OK ] pg_missing_t.split_into (0 ms) [----------] 12 tests from pg_missing_t (12 ms total) [----------] 1 test from ObjectContextTest [ RUN ] ObjectContextTest.read_write_lock Trying (1) with delay 0us Trying (2) with delay 0us Trying (2) with delay 1us Trying (3) with delay 1us Trying (4) with delay 1us Trying (4) with delay 3us Trying (4) with delay 7us Trying (4) with delay 15us Trying (4) with delay 31us Trying (4) with delay 63us Trying (4) with delay 127us Trying (4) with delay 255us Trying (4) with delay 511us Trying (4) with delay 1023us Trying (4) with delay 2047us [ OK ] ObjectContextTest.read_write_lock (5 ms) [----------] 1 test from ObjectContextTest (5 ms total) [----------] 2 tests from pg_pool_t_test [ RUN ] pg_pool_t_test.get_pg_num_divisor [ OK ] pg_pool_t_test.get_pg_num_divisor (0 ms) [ RUN ] pg_pool_t_test.get_random_pg_position 307 1.4e: 178742862 -> 1.4e 184 1.3f: 715281215 -> 1.3f 492 1.b0: 2668698288 -> 1.b0 434 1.ab: 2129104555 -> 1.ab 515 1.1b8: 1471878072 -> 1.1b8 833 1.173: 2585539955 -> 1.173 64 1.1a: 112555034 -> 1.1a 246 1.73: 1169928819 -> 1.73 926 1.321: 2940502817 -> 1.321 351 1.fb: 1991161595 -> 1.fb 201 1.e: 4051655182 -> 1.e 826 1.4e: 1590515790 -> 1.4e 259 1.dc: 1341472476 -> 1.dc 959 1.df: 2066803935 -> 1.df 940 1.36f: 4177277807 -> 1.36f 487 1.144: 4266032964 -> 1.144 2 1.0: 1359875642 -> 1.0 514 1.133: 1797956403 -> 1.133 304 1.68: 707074664 -> 1.68 24 1.f: 1101702687 -> 1.f 874 1.2ad: 2610238125 -> 1.2ad 560 1.1f0: 837319664 -> 1.1f0 700 1.1e6: 3445164006 -> 1.1e6 797 1.133: 1027769651 -> 1.133 810 1.23e: 3022508606 -> 1.23e 329 1.97: 2452458903 -> 1.97 479 1.fc: 1262306556 -> 1.fc 158 1.72: 2206885490 -> 1.72 301 1.ec: 2822332908 -> 1.ec 388 1.f3: 1020492531 -> 1.f3 122 1.73: 852267891 -> 1.73 457 1.59: 2158159449 -> 1.59 281 1.36: 2506102838 -> 1.36 84 1.44: 3894837060 -> 1.44 651 1.b3: 58096819 -> 1.b3 146 1.71: 1434176753 -> 1.71 455 1.fc: 170649340 -> 1.fc 587 1.1b7: 3991788983 -> 1.1b7 193 1.35: 1667420981 -> 1.35 144 1.30: 2657538736 -> 1.30 846 1.c8: 3405682888 -> 1.c8 851 1.305: 4229455621 -> 1.305 633 1.239: 4268325433 -> 1.239 801 1.1d0: 1270151120 -> 1.1d0 313 1.d5: 2162277589 -> 1.d5 814 1.323: 1345906467 -> 1.323 84 1.37: 711196087 -> 1.37 830 1.12d: 307673389 -> 1.12d 564 1.76: 352945270 -> 1.76 590 1.15d: 736272221 -> 1.15d 283 1.2b: 3680146219 -> 1.2b 870 1.75: 2903791733 -> 1.75 556 1.19: 2101897241 -> 1.19 394 1.57: 996215383 -> 1.57 532 1.16e: 228915566 -> 1.16e 131 1.63: 187747555 -> 1.63 816 1.28a: 222228106 -> 1.28a 120 1.67: 1367414247 -> 1.67 201 1.77: 3752968951 -> 1.77 711 1.26b: 3014789739 -> 1.26b 389 1.20: 2574159904 -> 1.20 710 1.1f: 1135325215 -> 1.1f 777 1.284: 1196357252 -> 1.284 222 1.68: 2188704872 -> 1.68 481 1.c1: 3213478593 -> 1.c1 204 1.ba: 2958518714 -> 1.ba 888 1.119: 355771673 -> 1.119 636 1.1c8: 1318040520 -> 1.1c8 737 1.9f: 1445949599 -> 1.9f 32 1.6: 3445605574 -> 1.6 368 1.10a: 1887672074 -> 1.10a 592 1.12f: 4186917679 -> 1.12f 201 1.c4: 871171780 -> 1.c4 531 1.11e: 3488007966 -> 1.11e 31 1.12: 4047211602 -> 1.12 717 1.1fa: 1234826746 -> 1.1fa 50 1.8: 3825441608 -> 1.8 818 1.4b: 2486787147 -> 1.4b 849 1.323: 303242019 -> 1.323 58 1.28: 3971809384 -> 1.28 35 1.18: 1733560568 -> 1.18 414 1.4d: 1767373901 -> 1.4d 416 1.19: 602379289 -> 1.19 245 1.b1: 2002697393 -> 1.b1 645 1.235: 3893834293 -> 1.235 853 1.156: 1188747094 -> 1.156 647 1.189: 1844293001 -> 1.189 194 1.b9: 2688798137 -> 1.b9 613 1.1c6: 4031803846 -> 1.1c6 456 1.ba: 3570793146 -> 
1.ba 433 1.67: 2245195879 -> 1.67 637 1.251: 4137236049 -> 1.251 424 1.ea: 4133371370 -> 1.ea 570 1.20c: 897167884 -> 1.20c 2 1.0: 2669538594 -> 1.0 847 1.27a: 1926894202 -> 1.27a 365 1.167: 4057220455 -> 1.167 91 1.56: 1870357974 -> 1.56 365 1.7: 1567036935 -> 1.7 219 1.17: 2575216407 -> 1.17 [ OK ] pg_pool_t_test.get_random_pg_position (1 ms) [----------] 2 tests from pg_pool_t_test (1 ms total) [----------] 1 test from shard_id_t [ RUN ] shard_id_t.iostream [ OK ] shard_id_t.iostream (0 ms) [----------] 1 test from shard_id_t (0 ms total) [----------] Global test environment tear-down [==========] 25 tests from 7 test cases ran. (19 ms total) [ PASSED ] 25 tests. PASS: unittest_osd_types 2014-10-08 11:12:51.181840 2b3af9400a40 -1 did not load config file, using default settings. [==========] Running 13 tests from 1 test case. [----------] Global test environment set-up. [----------] 13 tests from PGLogTest [ RUN ] PGLogTest.rewind_divergent_log osd/PGLog.cc: In function 'void PGLog::rewind_divergent_log(ObjectStore::Transaction&, eversion_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.182411 osd/PGLog.cc: 481: FAILED assert(newhead >= log.tail) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::rewind_divergent_log(ObjectStore::Transaction&, eversion_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x160) [0x911ff6] 3: (PGLogTest_rewind_divergent_log_Test::TestBody()+0x130) [0x8d9810] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2014-10-08 11:12:51.186378 2b3af9400a40 -1 osd/PGLog.cc: In function 'void PGLog::rewind_divergent_log(ObjectStore::Transaction&, eversion_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.182411 osd/PGLog.cc: 481: FAILED assert(newhead >= log.tail) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::rewind_divergent_log(ObjectStore::Transaction&, eversion_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x160) [0x911ff6] 3: (PGLogTest_rewind_divergent_log_Test::TestBody()+0x130) [0x8d9810] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
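In the pg_pool_t_test.get_random_pg_position output further up, each triple is pg_num, the resulting pg, and the raw hash: e.g. with pg_num 307 the raw value 178742862 lands in pg 1.4e (0x4e = 78). The raw-to-pg step is Ceph's stable modulo, reproduced below from include/ceph_fs.h; the mask argument is the next power of two >= pg_num, minus one (511 for pg_num 307). Unlike a plain x % pg_num, this keeps most placements fixed when pg_num grows, which is what makes PG splitting cheap.

    #include <cstdio>

    // Stable modulo from include/ceph_fs.h: like x % b, but built on a
    // power-of-two mask so raising b splits buckets instead of remapping
    // everything. bmask is the next power of two >= b, minus one.
    static int ceph_stable_mod(int x, int b, int bmask) {
      return ((x & bmask) < b) ? (x & bmask) : (x & (bmask >> 1));
    }

    int main() {
      // "307 1.4e: 178742862 -> 1.4e" from the log: pg_num 307, mask 511.
      std::printf("0x%x\n", ceph_stable_mod(178742862, 307, 511));  // 0x4e
    }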
--- begin dump of recent events --- -14> 2014-10-08 11:12:51.181524 2b3af9400a40 5 asok(0x3faa000) register_command perfcounters_dump hook 0x3f9d1d0 -13> 2014-10-08 11:12:51.181578 2b3af9400a40 5 asok(0x3faa000) register_command 1 hook 0x3f9d1d0 -12> 2014-10-08 11:12:51.181595 2b3af9400a40 5 asok(0x3faa000) register_command perf dump hook 0x3f9d1d0 -11> 2014-10-08 11:12:51.181608 2b3af9400a40 5 asok(0x3faa000) register_command perfcounters_schema hook 0x3f9d1d0 -10> 2014-10-08 11:12:51.181616 2b3af9400a40 5 asok(0x3faa000) register_command 2 hook 0x3f9d1d0 -9> 2014-10-08 11:12:51.181622 2b3af9400a40 5 asok(0x3faa000) register_command perf schema hook 0x3f9d1d0 -8> 2014-10-08 11:12:51.181635 2b3af9400a40 5 asok(0x3faa000) register_command config show hook 0x3f9d1d0 -7> 2014-10-08 11:12:51.181652 2b3af9400a40 5 asok(0x3faa000) register_command config set hook 0x3f9d1d0 -6> 2014-10-08 11:12:51.181661 2b3af9400a40 5 asok(0x3faa000) register_command config get hook 0x3f9d1d0 -5> 2014-10-08 11:12:51.181669 2b3af9400a40 5 asok(0x3faa000) register_command config diff hook 0x3f9d1d0 -4> 2014-10-08 11:12:51.181677 2b3af9400a40 5 asok(0x3faa000) register_command log flush hook 0x3f9d1d0 -3> 2014-10-08 11:12:51.181684 2b3af9400a40 5 asok(0x3faa000) register_command log dump hook 0x3f9d1d0 -2> 2014-10-08 11:12:51.181694 2b3af9400a40 5 asok(0x3faa000) register_command log reopen hook 0x3f9d1d0 -1> 2014-10-08 11:12:51.181840 2b3af9400a40 -1 did not load config file, using default settings. 0> 2014-10-08 11:12:51.186378 2b3af9400a40 -1 osd/PGLog.cc: In function 'void PGLog::rewind_divergent_log(ObjectStore::Transaction&, eversion_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.182411 osd/PGLog.cc: 481: FAILED assert(newhead >= log.tail) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::rewind_divergent_log(ObjectStore::Transaction&, eversion_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x160) [0x911ff6] 3: (PGLogTest_rewind_divergent_log_Test::TestBody()+0x130) [0x8d9810] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 1/ 5 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 keyvaluestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs -2/-2 (syslog threshold) 99/99 (stderr threshold) max_recent 500 max_new 1000 log_file --- end dump of recent events --- [ OK ] PGLogTest.rewind_divergent_log (4 ms) [ RUN ] PGLogTest.merge_old_entry [ OK ] PGLogTest.merge_old_entry (1 ms) [ RUN ] PGLogTest.merge_log osd/PGLog.cc: In function 'void PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.187217 osd/PGLog.cc: 544: FAILED assert(!log.null() || olog.tail == eversion_t()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x1e5) [0x9126ed] 3: (PGLogTest_merge_log_Test::TestBody()+0x73a9) [0x8ebc33] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2014-10-08 11:12:51.190986 2b3af9400a40 -1 osd/PGLog.cc: In function 'void PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.187217 osd/PGLog.cc: 544: FAILED assert(!log.null() || olog.tail == eversion_t()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x1e5) [0x9126ed] 3: (PGLogTest_merge_log_Test::TestBody()+0x73a9) [0x8ebc33] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- begin dump of recent events --- 0> 2014-10-08 11:12:51.190986 2b3af9400a40 -1 osd/PGLog.cc: In function 'void PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.187217 osd/PGLog.cc: 544: FAILED assert(!log.null() || olog.tail == eversion_t()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x1e5) [0x9126ed] 3: (PGLogTest_merge_log_Test::TestBody()+0x73a9) [0x8ebc33] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. --- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 1/ 5 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 keyvaluestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs -2/-2 (syslog threshold) 99/99 (stderr threshold) max_recent 500 max_new 1000 log_file --- end dump of recent events --- osd/PGLog.cc: In function 'void PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.191275 osd/PGLog.cc: 546: FAILED assert(log.head >= olog.tail && olog.head >= log.tail) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x255) [0x91275d] 3: (PGLogTest_merge_log_Test::TestBody()+0x7871) [0x8ec0fb] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
2014-10-08 11:12:51.194938 2b3af9400a40 -1 osd/PGLog.cc: In function 'void PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.191275 osd/PGLog.cc: 546: FAILED assert(log.head >= olog.tail && olog.head >= log.tail) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x255) [0x91275d] 3: (PGLogTest_merge_log_Test::TestBody()+0x7871) [0x8ec0fb] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. --- begin dump of recent events --- 0> 2014-10-08 11:12:51.194938 2b3af9400a40 -1 osd/PGLog.cc: In function 'void PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)' thread 2b3af9400a40 time 2014-10-08 11:12:51.191275 osd/PGLog.cc: 546: FAILED assert(log.head >= olog.tail && olog.head >= log.tail) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x9a8e41] 2: (PGLog::merge_log(ObjectStore::Transaction&, pg_info_t&, pg_log_t&, pg_shard_t, pg_info_t&, PGLog::LogEntryHandler*, bool&, bool&)+0x255) [0x91275d] 3: (PGLogTest_merge_log_Test::TestBody()+0x7871) [0x8ec0fb] 4: (testing::Test::Run()+0x95) [0x983aa7] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x98400d] 6: (testing::TestCase::Run()+0xca) [0x984518] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x988a94] 8: (testing::UnitTest::Run()+0x1c) [0x9879f6] 9: (main()+0x86) [0x8f6c32] 10: (__libc_start_main()+0xed) [0x2b3af8e4876d] 11: ./unittest_pglog() [0x8d9479] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
--- logging levels --- 0/ 5 none 0/ 1 lockdep 0/ 1 context 1/ 1 crush 1/ 5 mds 1/ 5 mds_balancer 1/ 5 mds_locker 1/ 5 mds_log 1/ 5 mds_log_expire 1/ 5 mds_migrator 0/ 1 buffer 0/ 1 timer 0/ 1 filer 0/ 1 striper 0/ 1 objecter 0/ 5 rados 0/ 5 rbd 0/ 5 rbd_replay 0/ 5 journaler 0/ 5 objectcacher 0/ 5 client 0/ 5 osd 0/ 5 optracker 0/ 5 objclass 1/ 3 filestore 1/ 3 keyvaluestore 1/ 3 journal 0/ 5 ms 1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/10 civetweb 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle 0/ 0 refs -2/-2 (syslog threshold) 99/99 (stderr threshold) max_recent 500 max_new 1000 log_file --- end dump of recent events --- [ OK ] PGLogTest.merge_log (8 ms) [ RUN ] PGLogTest.proc_replica_log [ OK ] PGLogTest.proc_replica_log (0 ms) [ RUN ] PGLogTest.merge_log_1 [ OK ] PGLogTest.merge_log_1 (0 ms) [ RUN ] PGLogTest.merge_log_2 [ OK ] PGLogTest.merge_log_2 (0 ms) [ RUN ] PGLogTest.merge_log_3 [ OK ] PGLogTest.merge_log_3 (0 ms) [ RUN ] PGLogTest.merge_log_4 [ OK ] PGLogTest.merge_log_4 (0 ms) [ RUN ] PGLogTest.merge_log_5 [ OK ] PGLogTest.merge_log_5 (1 ms) [ RUN ] PGLogTest.merge_log_6 [ OK ] PGLogTest.merge_log_6 (0 ms) [ RUN ] PGLogTest.merge_log_7 [ OK ] PGLogTest.merge_log_7 (0 ms) [ RUN ] PGLogTest.merge_log_8 [ OK ] PGLogTest.merge_log_8 (0 ms) [ RUN ] PGLogTest.merge_log_prior_version_have [ OK ] PGLogTest.merge_log_prior_version_have (0 ms) [----------] 13 tests from PGLogTest (14 ms total) [----------] Global test environment tear-down [==========] 13 tests from 1 test case ran. (14 ms total) [ PASSED ] 13 tests. PASS: unittest_pglog Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from ECUtil [ RUN ] ECUtil.stripe_info_t [ OK ] ECUtil.stripe_info_t (0 ms) [----------] 1 test from ECUtil (0 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (0 ms total) [ PASSED ] 1 test. PASS: unittest_ecbackend Running main() from gtest_main.cc [==========] Running 12 tests from 3 test cases. [----------] Global test environment set-up. 
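The PGLog asserts exercised above encode interval invariants on a log's [tail, head] range: rewind_divergent_log refuses a new head older than its own tail (assert(newhead >= log.tail), since entries below the tail are already trimmed away), and merge_log requires the two logs to overlap (assert(log.head >= olog.tail && olog.head >= log.tail)), since disjoint ranges leave a gap that cannot be reconciled. A minimal sketch of the overlap check, with a simplified stand-in for eversion_t:

    #include <cstdio>
    #include <tuple>

    // Simplified eversion_t: (epoch, version), compared lexicographically.
    struct Ver {
      unsigned epoch, v;
      bool operator>=(const Ver& o) const {
        return std::tie(epoch, v) >= std::tie(o.epoch, o.v);
      }
    };

    struct LogRange { Ver tail, head; };

    // merge_log precondition (osd/PGLog.cc:546): the ranges must overlap.
    static bool can_merge(const LogRange& a, const LogRange& b) {
      return a.head >= b.tail && b.head >= a.tail;
    }

    int main() {
      LogRange log  = {{1, 1}, {1, 5}};
      LogRange olog = {{1, 4}, {1, 9}};
      std::printf("%d\n", (int)can_merge(log, olog));  // 1: they overlap
    }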
[----------] 6 tests from BloomHitSetTest [ RUN ] BloomHitSetTest.Params [ OK ] BloomHitSetTest.Params (0 ms) [ RUN ] BloomHitSetTest.Construct [ OK ] BloomHitSetTest.Construct (0 ms) [ RUN ] BloomHitSetTest.Rebuild [ OK ] BloomHitSetTest.Rebuild (0 ms) [ RUN ] BloomHitSetTest.InsertsMatch [ OK ] BloomHitSetTest.InsertsMatch (0 ms) [ RUN ] BloomHitSetTest.FillsUp [ OK ] BloomHitSetTest.FillsUp (1 ms) [ RUN ] BloomHitSetTest.RejectsNoMatch [ OK ] BloomHitSetTest.RejectsNoMatch (0 ms) [----------] 6 tests from BloomHitSetTest (1 ms total) [----------] 3 tests from ExplicitHashHitSetTest [ RUN ] ExplicitHashHitSetTest.Construct [ OK ] ExplicitHashHitSetTest.Construct (0 ms) [ RUN ] ExplicitHashHitSetTest.InsertsMatch [ OK ] ExplicitHashHitSetTest.InsertsMatch (0 ms) [ RUN ] ExplicitHashHitSetTest.RejectsNoMatch [ OK ] ExplicitHashHitSetTest.RejectsNoMatch (1 ms) [----------] 3 tests from ExplicitHashHitSetTest (1 ms total) [----------] 3 tests from ExplicitObjectHitSetTest [ RUN ] ExplicitObjectHitSetTest.Construct [ OK ] ExplicitObjectHitSetTest.Construct (0 ms) [ RUN ] ExplicitObjectHitSetTest.InsertsMatch [ OK ] ExplicitObjectHitSetTest.InsertsMatch (0 ms) [ RUN ] ExplicitObjectHitSetTest.RejectsNoMatch [ OK ] ExplicitObjectHitSetTest.RejectsNoMatch (0 ms) [----------] 3 tests from ExplicitObjectHitSetTest (0 ms total) [----------] Global test environment tear-down [==========] 12 tests from 3 test cases ran. (2 ms total) [ PASSED ] 12 tests. PASS: unittest_hitset Running main() from gtest_main.cc [==========] Running 5 tests from 1 test case. [----------] Global test environment set-up. [----------] 5 tests from lru [ RUN ] lru.InsertTop [ OK ] lru.InsertTop (0 ms) [ RUN ] lru.InsertMid [ OK ] lru.InsertMid (0 ms) [ RUN ] lru.InsertBot [ OK ] lru.InsertBot (0 ms) [ RUN ] lru.Adjust [ OK ] lru.Adjust (0 ms) [ RUN ] lru.Pinning [ OK ] lru.Pinning (0 ms) [----------] 5 tests from lru (0 ms total) [----------] Global test environment tear-down [==========] 5 tests from 1 test case ran. (0 ms total) [ PASSED ] 5 tests. PASS: unittest_lru Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from io_priority [ RUN ] io_priority.ceph_ioprio_string_to_class [ OK ] io_priority.ceph_ioprio_string_to_class (0 ms) [----------] 1 test from io_priority (0 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (0 ms total) [ PASSED ] 1 test. PASS: unittest_io_priority [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from ContextGather [ RUN ] ContextGather.Constructor [ OK ] ContextGather.Constructor (0 ms) [ RUN ] ContextGather.OneSub [ OK ] ContextGather.OneSub (0 ms) [ RUN ] ContextGather.ManySubs [ OK ] ContextGather.ManySubs (0 ms) [ RUN ] ContextGather.AlternatingSubCreateFinish [ OK ] ContextGather.AlternatingSubCreateFinish (0 ms) [----------] 4 tests from ContextGather (0 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (0 ms total) [ PASSED ] 4 tests. PASS: unittest_gather Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from RunCommand [ RUN ] RunCommand.StringSimple [ OK ] RunCommand.StringSimple (7 ms) [----------] 1 test from RunCommand (7 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. 
(7 ms total) [ PASSED ] 1 test. PASS: unittest_run_cmd [==========] Running 6 tests from 3 test cases. [----------] Global test environment set-up. [----------] 2 tests from SignalApi [ RUN ] SignalApi.SimpleInstall [ OK ] SignalApi.SimpleInstall (0 ms) [ RUN ] SignalApi.SimpleInstallAndTest [ OK ] SignalApi.SimpleInstallAndTest (0 ms) [----------] 2 tests from SignalApi (0 ms total) [----------] 1 test from SignalEffects [ RUN ] SignalEffects.ErrnoTest1 [ OK ] SignalEffects.ErrnoTest1 (0 ms) [----------] 1 test from SignalEffects (0 ms total) [----------] 3 tests from SignalHandler [ RUN ] SignalHandler.Single [ OK ] SignalHandler.Single (1000 ms) [ RUN ] SignalHandler.Multiple [ OK ] SignalHandler.Multiple (1000 ms) [ RUN ] SignalHandler.LogInternal [WARNING] ./src/gtest-death-test.cc:741:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads. [ OK ] SignalHandler.LogInternal (1 ms) [----------] 3 tests from SignalHandler (2002 ms total) [----------] Global test environment tear-down [==========] 6 tests from 3 test cases ran. (2002 ms total) [ PASSED ] 6 tests. PASS: unittest_signals Running main() from gtest_main.cc [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from SimpleSpin [ RUN ] SimpleSpin.Test0 [ OK ] SimpleSpin.Test0 (0 ms) [ RUN ] SimpleSpin.Test1 [ OK ] SimpleSpin.Test1 (65 ms) [----------] 2 tests from SimpleSpin (66 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (66 ms total) [ PASSED ] 2 tests. PASS: unittest_simple_spin Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from Librados [ RUN ] Librados.CreateShutdown [ OK ] Librados.CreateShutdown (2 ms) [----------] 1 test from Librados (2 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (3 ms total) [ PASSED ] 1 test. PASS: unittest_librados Running main() from gtest_main.cc [==========] Running 92 tests from 7 test cases. [----------] Global test environment set-up. 
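[editor's note] The SignalHandler timings above (Single and Multiple at ~1000 ms each suggest a deliberate one-second wait), and the gtest [WARNING] about fork() appears whenever a death test runs in a process that may already have threads. SignalApi.SimpleInstallAndTest presumably installs a handler and raises the signal at itself; a plain-POSIX sketch of that pattern, not the actual Ceph test body:

    #include <cassert>
    #include <csignal>

    static volatile sig_atomic_t g_seen = 0;
    static void on_sig(int) { g_seen = 1; }

    int main() {
      struct sigaction sa = {};
      sa.sa_handler = on_sig;
      sigemptyset(&sa.sa_mask);
      assert(sigaction(SIGUSR1, &sa, nullptr) == 0);
      raise(SIGUSR1);        // synchronous delivery in a single thread
      assert(g_seen == 1);   // the handler observed the signal
      return 0;
    }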
[----------] 1 test from Buffer [ RUN ] Buffer.constructors [ OK ] Buffer.constructors (1 ms) [----------] 1 test from Buffer (1 ms total) [----------] 1 test from BufferRaw [ RUN ] BufferRaw.ostream [ OK ] BufferRaw.ostream (0 ms) [----------] 1 test from BufferRaw (0 ms total) [----------] 14 tests from TestRawPipe [ RUN ] TestRawPipe.create_zero_copy [ OK ] TestRawPipe.create_zero_copy (1 ms) [ RUN ] TestRawPipe.c_str_no_fd [ OK ] TestRawPipe.c_str_no_fd (1 ms) [ RUN ] TestRawPipe.c_str_basic [ OK ] TestRawPipe.c_str_basic (1 ms) [ RUN ] TestRawPipe.c_str_twice [ OK ] TestRawPipe.c_str_twice (1 ms) [ RUN ] TestRawPipe.c_str_basic_offset [ OK ] TestRawPipe.c_str_basic_offset (1 ms) [ RUN ] TestRawPipe.c_str_dest_short [ OK ] TestRawPipe.c_str_dest_short (1 ms) [ RUN ] TestRawPipe.c_str_source_short [ OK ] TestRawPipe.c_str_source_short (1 ms) [ RUN ] TestRawPipe.c_str_explicit_zero_offset [ OK ] TestRawPipe.c_str_explicit_zero_offset (1 ms) [ RUN ] TestRawPipe.c_str_explicit_positive_offset [ OK ] TestRawPipe.c_str_explicit_positive_offset (1 ms) [ RUN ] TestRawPipe.c_str_explicit_positive_empty_result [ OK ] TestRawPipe.c_str_explicit_positive_empty_result (1 ms) [ RUN ] TestRawPipe.c_str_source_short_explicit_offset [ OK ] TestRawPipe.c_str_source_short_explicit_offset (1 ms) [ RUN ] TestRawPipe.c_str_dest_short_explicit_offset [ OK ] TestRawPipe.c_str_dest_short_explicit_offset (1 ms) [ RUN ] TestRawPipe.buffer_list_read_fd_zero_copy [ OK ] TestRawPipe.buffer_list_read_fd_zero_copy (1 ms) [ RUN ] TestRawPipe.buffer_list_write_fd_zero_copy [ OK ] TestRawPipe.buffer_list_write_fd_zero_copy (2 ms) [----------] 14 tests from TestRawPipe (15 ms total) [----------] 17 tests from BufferPtr [ RUN ] BufferPtr.constructors common/buffer.cc: In function 'ceph::buffer::ptr::ptr(const ceph::buffer::ptr&, unsigned int, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.114130 common/buffer.cc: 573: FAILED assert(o+l <= p._len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned int)+0x73) [0x8aa099] 3: (BufferPtr_constructors_Test::TestBody()+0x1976) [0x850a70] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
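[editor's note] The BufferPtr assert dumps around here (constructors above, accessors and copy/append below) are again expected: the tests construct out-of-range sub-pointers and poke at unbound ptrs on purpose, and every case ends [ OK ]. The invariants being exercised, sketched with a stand-in type rather than ceph::buffer::ptr:

    #include <cassert>
    #include <cstddef>

    struct View {
      const char* raw;     // stand-in for _raw; null means unbound
      size_t off, len;

      // A sub-view (offset o, length l) must lie inside this view.
      View sub(size_t o, size_t l) const {
        assert(raw);             // cf. FAILED assert(_raw)
        assert(o + l <= len);    // cf. FAILED assert(o+l <= p._len)
        return View{raw, off + o, l};
      }
      char operator[](size_t n) const {
        assert(raw);             // cf. FAILED assert(_raw)
        assert(n < len);         // cf. FAILED assert(n < _len)
        return raw[off + n];
      }
    };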
common/buffer.cc: In function 'ceph::buffer::ptr::ptr(const ceph::buffer::ptr&, unsigned int, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.117149 common/buffer.cc: 574: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned int)+0x9e) [0x8aa0c4] 3: (BufferPtr_constructors_Test::TestBody()+0x1a5b) [0x850b55] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.constructors (6 ms) [ RUN ] BufferPtr.assignment [ OK ] BufferPtr.assignment (0 ms) [ RUN ] BufferPtr.clone [ OK ] BufferPtr.clone (0 ms) [ RUN ] BufferPtr.swap [ OK ] BufferPtr.swap (0 ms) [ RUN ] BufferPtr.release [ OK ] BufferPtr.release (0 ms) [ RUN ] BufferPtr.have_raw [ OK ] BufferPtr.have_raw (0 ms) [ RUN ] BufferPtr.at_buffer_head [ OK ] BufferPtr.at_buffer_head (0 ms) [ RUN ] BufferPtr.at_buffer_tail [ OK ] BufferPtr.at_buffer_tail (0 ms) [ RUN ] BufferPtr.is_n_page_sized [ OK ] BufferPtr.is_n_page_sized (0 ms) [ RUN ] BufferPtr.accessors common/buffer.cc: In function 'char* ceph::buffer::ptr::c_str()' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.120196 common/buffer.cc: 635: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::c_str()+0x37) [0x8aa353] 3: (BufferPtr_accessors_Test::TestBody()+0x26d) [0x853ec7] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'char& ceph::buffer::ptr::operator[](unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.123080 common/buffer.cc: 656: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int)+0x3a) [0x8aa4ac] 3: (BufferPtr_accessors_Test::TestBody()+0x328) [0x853f82] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'const char* ceph::buffer::ptr::c_str() const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.125852 common/buffer.cc: 629: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::c_str() const+0x37) [0x8aa2d1] 3: (BufferPtr_accessors_Test::TestBody()+0x4d0) [0x85412a] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'const char& ceph::buffer::ptr::operator[](unsigned int) const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.128804 common/buffer.cc: 650: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int) const+0x3a) [0x8aa416] 3: (BufferPtr_accessors_Test::TestBody()+0x58b) [0x8541e5] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'char& ceph::buffer::ptr::operator[](unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.131686 common/buffer.cc: 657: FAILED assert(n < _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int)+0x65) [0x8aa4d7] 3: (BufferPtr_accessors_Test::TestBody()+0xca7) [0x854901] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'const char& ceph::buffer::ptr::operator[](unsigned int) const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.134562 common/buffer.cc: 651: FAILED assert(n < _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int) const+0x65) [0x8aa441] 3: (BufferPtr_accessors_Test::TestBody()+0xd62) [0x8549bc] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'const char* ceph::buffer::ptr::raw_c_str() const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.137423 common/buffer.cc: 661: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::raw_c_str() const+0x37) [0x8aa53f] 3: (BufferPtr_accessors_Test::TestBody()+0xe27) [0x854a81] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'unsigned int ceph::buffer::ptr::raw_length() const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.140278 common/buffer.cc: 662: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::raw_length() const+0x37) [0x8aa583] 3: (BufferPtr_accessors_Test::TestBody()+0xedd) [0x854b37] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'int ceph::buffer::ptr::raw_nref() const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.143148 common/buffer.cc: 663: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::raw_nref() const+0x37) [0x8aa5c7] 3: (BufferPtr_accessors_Test::TestBody()+0xf93) [0x854bed] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.accessors (26 ms) [ RUN ] BufferPtr.cmp [ OK ] BufferPtr.cmp (0 ms) [ RUN ] BufferPtr.is_zero [ OK ] BufferPtr.is_zero (0 ms) [ RUN ] BufferPtr.copy_out ./include/buffer.h: In function 'void ceph::buffer::ptr::copy_out(unsigned int, unsigned int, char*) const' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.146076 ./include/buffer.h: 200: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_out(unsigned int, unsigned int, char*) const+0x3c) [0x8787c0] 3: (BufferPtr_copy_out_Test::TestBody()+0x6b) [0x85679d] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.copy_out (2 ms) [ RUN ] BufferPtr.copy_in common/buffer.cc: In function 'void ceph::buffer::ptr::copy_in(unsigned int, unsigned int, const char*)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.148932 common/buffer.cc: 715: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*)+0x42) [0x8aa8e4] 3: (BufferPtr_copy_in_Test::TestBody()+0x6b) [0x856ced] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'void ceph::buffer::ptr::copy_in(unsigned int, unsigned int, const char*)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.151798 common/buffer.cc: 717: FAILED assert(o+l <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*)+0x9f) [0x8aa941] 3: (BufferPtr_copy_in_Test::TestBody()+0x15c) [0x856dde] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'void ceph::buffer::ptr::copy_in(unsigned int, unsigned int, const char*)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.154661 common/buffer.cc: 716: FAILED assert(o <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*)+0x6d) [0x8aa90f] 3: (BufferPtr_copy_in_Test::TestBody()+0x21f) [0x856ea1] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.copy_in (9 ms) [ RUN ] BufferPtr.append common/buffer.cc: In function 'void ceph::buffer::ptr::append(char)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.157538 common/buffer.cc: 699: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char)+0x3c) [0x8aa784] 3: (BufferPtr_append_Test::TestBody()+0x52) [0x85732e] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'void ceph::buffer::ptr::append(const char*, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.160403 common/buffer.cc: 707: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char const*, unsigned int)+0x3f) [0x8aa827] 3: (BufferPtr_append_Test::TestBody()+0x106) [0x8573e2] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'void ceph::buffer::ptr::append(char)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.163245 common/buffer.cc: 700: FAILED assert(1 <= unused_tail_length()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char)+0x6b) [0x8aa7b3] 3: (BufferPtr_append_Test::TestBody()+0x1d8) [0x8574b4] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'void ceph::buffer::ptr::append(const char*, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.166204 common/buffer.cc: 708: FAILED assert(l <= unused_tail_length()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char const*, unsigned int)+0x6f) [0x8aa857] 3: (BufferPtr_append_Test::TestBody()+0x28c) [0x857568] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
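[editor's note] Same pattern for the copy_in and append dumps above: writes must start and end inside the buffer (o <= _len, o+l <= _len), and appends may not overrun the raw allocation's unused tail. A sketch of the append-side invariant (illustrative type, not the real one):

    #include <cassert>
    #include <cstring>

    struct GrowBuf {
      char* raw;        // backing allocation
      size_t cap;       // capacity of raw
      size_t len = 0;   // bytes used so far

      size_t unused_tail_length() const { return cap - len; }

      void append(const char* p, size_t l) {
        assert(raw);                         // cf. assert(_raw)
        assert(l <= unused_tail_length());   // cf. the dump above
        std::memcpy(raw + len, p, l);
        len += l;
      }
    };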
[ OK ] BufferPtr.append (12 ms) [ RUN ] BufferPtr.zero common/buffer.cc: In function 'void ceph::buffer::ptr::zero(unsigned int, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.169079 common/buffer.cc: 730: FAILED assert(o+l <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::zero(unsigned int, unsigned int)+0x45) [0x8aaa0f] 3: (BufferPtr_zero_Test::TestBody()+0xa0) [0x857cfa] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.zero (3 ms) [ RUN ] BufferPtr.ostream [ OK ] BufferPtr.ostream (0 ms) [----------] 17 tests from BufferPtr (58 ms total) [----------] 12 tests from BufferListIterator [ RUN ] BufferListIterator.constructors [ OK ] BufferListIterator.constructors (0 ms) [ RUN ] BufferListIterator.operator_equal [ OK ] BufferListIterator.operator_equal (0 ms) [ RUN ] BufferListIterator.get_off [ OK ] BufferListIterator.get_off (0 ms) [ RUN ] BufferListIterator.get_remaining [ OK ] BufferListIterator.get_remaining (0 ms) [ RUN ] BufferListIterator.end [ OK ] BufferListIterator.end (0 ms) [ RUN ] BufferListIterator.advance [ OK ] BufferListIterator.advance (0 ms) [ RUN ] BufferListIterator.seek [ OK ] BufferListIterator.seek (0 ms) [ RUN ] BufferListIterator.operator_star [ OK ] BufferListIterator.operator_star (0 ms) [ RUN ] BufferListIterator.operator_plus_plus [ OK ] BufferListIterator.operator_plus_plus (0 ms) [ RUN ] BufferListIterator.get_current_ptr [ OK ] BufferListIterator.get_current_ptr (0 ms) [ RUN ] BufferListIterator.copy [ OK ] BufferListIterator.copy (0 ms) [ RUN ] BufferListIterator.copy_in [ OK ] BufferListIterator.copy_in (0 ms) [----------] 12 tests from BufferListIterator (0 ms total) [----------] 46 tests from BufferList [ RUN ] BufferList.constructors [ OK ] BufferList.constructors (0 ms) [ RUN ] BufferList.operator_equal [ OK ] BufferList.operator_equal (0 ms) [ RUN ] BufferList.buffers [ OK ] BufferList.buffers (0 ms) [ RUN ] BufferList.swap [ OK ] BufferList.swap (0 ms) [ RUN ] BufferList.length [ OK ] BufferList.length (0 ms) [ RUN ] BufferList.contents_equal [ OK ] BufferList.contents_equal (0 ms) [ RUN ] BufferList.is_page_aligned [ OK ] BufferList.is_page_aligned (0 ms) [ RUN ] BufferList.is_n_page_sized [ OK ] BufferList.is_n_page_sized (0 ms) [ RUN ] BufferList.is_zero [ OK ] BufferList.is_zero (0 ms) [ RUN ] BufferList.clear [ OK ] BufferList.clear (0 ms) [ RUN ] BufferList.push_front [ OK ] BufferList.push_front (0 ms) [ RUN ] BufferList.push_back [ OK ] BufferList.push_back (0 ms) [ RUN ] BufferList.is_contiguous [ OK ] BufferList.is_contiguous (0 ms) [ RUN ] BufferList.rebuild [ OK ] BufferList.rebuild (0 ms) [ RUN ] BufferList.rebuild_page_aligned [ OK ] BufferList.rebuild_page_aligned (0 ms) [ RUN ] BufferList.claim [ OK ] BufferList.claim (0 ms) [ RUN ] BufferList.claim_append [ OK ] BufferList.claim_append (0 ms) [ RUN ] BufferList.claim_prepend [ OK ] BufferList.claim_prepend (0 ms) [ RUN ] BufferList.begin [ OK ] BufferList.begin (0 ms) [ RUN ] BufferList.end [ OK ] 
BufferList.end (0 ms) [ RUN ] BufferList.copy [ OK ] BufferList.copy (1 ms) [ RUN ] BufferList.copy_in [ OK ] BufferList.copy_in (0 ms) [ RUN ] BufferList.append common/buffer.cc: In function 'void ceph::buffer::list::append(const ceph::buffer::ptr&, unsigned int, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:55.173104 common/buffer.cc: 1257: FAILED assert(len+off <= bp.length()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsigned int)+0x4f) [0x8acb6d] 3: (BufferList_append_Test::TestBody()+0x1227) [0x86a503] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferList.append (3 ms) [ RUN ] BufferList.append_zero [ OK ] BufferList.append_zero (0 ms) [ RUN ] BufferList.operator_brackets [ OK ] BufferList.operator_brackets (0 ms) [ RUN ] BufferList.c_str [ OK ] BufferList.c_str (0 ms) [ RUN ] BufferList.substr_of [ OK ] BufferList.substr_of (0 ms) [ RUN ] BufferList.splice [ OK ] BufferList.splice (0 ms) [ RUN ] BufferList.write [ OK ] BufferList.write (0 ms) [ RUN ] BufferList.encode_base64 [ OK ] BufferList.encode_base64 (0 ms) [ RUN ] BufferList.decode_base64 [ OK ] BufferList.decode_base64 (0 ms) [ RUN ] BufferList.hexdump [ OK ] BufferList.hexdump (0 ms) [ RUN ] BufferList.read_file [ OK ] BufferList.read_file (8 ms) [ RUN ] BufferList.read_fd [ OK ] BufferList.read_fd (0 ms) [ RUN ] BufferList.write_file bufferlist::write_file(un/like/ly): failed to open file: (2) No such file or directory [ OK ] BufferList.write_file (0 ms) [ RUN ] BufferList.write_fd [ OK ] BufferList.write_fd (2 ms) [ RUN ] BufferList.crc32c [ OK ] BufferList.crc32c (0 ms) [ RUN ] BufferList.crc32c_append [ OK ] BufferList.crc32c_append (10 ms) [ RUN ] BufferList.crc32c_append_perf populating large buffers (a, b=c=d) a.crc32c(0) = 1138817026 at 3124.28 MB/sec a.crc32c(0) (again) = 1138817026 at 1.28e+08 MB/sec a.crc32c(5) = 3239494520 at 20809.6 MB/sec a.crc32c(5) (again) = 3239494520 at 20802.9 MB/sec b.crc32c(0) = 2481791210 at 4072.35 MB/sec b.crc32c(0) (again)= 2481791210 at 1.28e+08 MB/sec ab.crc32c(0) = 2988268779 at 43119.4 MB/sec ac.crc32c(0) = 2988268779 at 9122.98 MB/sec ba.crc32c(0) = 169240695 at 43101.3 MB/sec ba.crc32c(5) = 1265464778 at 21548.8 MB/sec crc cache hits (same start) = 5 crc cache hits (adjusted) = 6 [ OK ] BufferList.crc32c_append_perf (2564 ms) [ RUN ] BufferList.compare [ OK ] BufferList.compare (1 ms) [ RUN ] BufferList.ostream buffer::list(len=6, buffer::ptr(0~3 0x49c4460 in raw 0x49c4460 len 3 nref 1), buffer::ptr(0~3 0x49c4220 in raw 0x49c4220 len 3 nref 1) ) [ OK ] BufferList.ostream (0 ms) [ RUN ] BufferList.zero common/buffer.cc: In function 'void ceph::buffer::list::zero(unsigned int, unsigned int)' thread 2b25a9ee7c40 time 2014-10-08 11:12:57.762261 common/buffer.cc: 1057: FAILED assert(o+l <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: 
(ceph::buffer::list::zero(unsigned int, unsigned int)+0x47) [0x8abe07] 3: (BufferList_zero_Test::TestBody()+0x45e) [0x874bf2] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b25a9b4876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferList.zero (2 ms) [ RUN ] BufferList.EmptyAppend [ OK ] BufferList.EmptyAppend (0 ms) [ RUN ] BufferList.TestPtrAppend [ OK ] BufferList.TestPtrAppend (15 ms) [ RUN ] BufferList.TestDirectAppend [ OK ] BufferList.TestDirectAppend (14 ms) [ RUN ] BufferList.TestCopyAll [ OK ] BufferList.TestCopyAll (86 ms) [----------] 46 tests from BufferList (2707 ms total) [----------] 1 test from BufferHash [ RUN ] BufferHash.all [ OK ] BufferHash.all (0 ms) [----------] 1 test from BufferHash (0 ms total) [----------] Global test environment tear-down [==========] 92 tests from 7 test cases ran. (2781 ms total) [ PASSED ] 92 tests. PASS: unittest_bufferlist Running main() from gtest_main.cc [==========] Running 7 tests from 1 test case. [----------] Global test environment set-up. [----------] 7 tests from Crc32c [ RUN ] Crc32c.Small [ OK ] Crc32c.Small (0 ms) [ RUN ] Crc32c.PartialWord [ OK ] Crc32c.PartialWord (0 ms) [ RUN ] Crc32c.Big [ OK ] Crc32c.Big (4 ms) [ RUN ] Crc32c.Performance populating large buffer calculating crc best choice = 3861.7 MB/sec best choice 0xffffffff = 3823.1 MB/sec sctp = 1240.44 MB/sec intel baseline = 352.234 MB/sec [ OK ] Crc32c.Performance (7722 ms) [ RUN ] Crc32c.Range [ OK ] Crc32c.Range (0 ms) [ RUN ] Crc32c.RangeZero [ OK ] Crc32c.RangeZero (0 ms) [ RUN ] Crc32c.RangeNull [ OK ] Crc32c.RangeNull (0 ms) [----------] 7 tests from Crc32c (7727 ms total) [----------] Global test environment tear-down [==========] 7 tests from 1 test case ran. (7727 ms total) [ PASSED ] 7 tests. PASS: unittest_crc32c ceph_arch_intel_sse42 = 1 ceph_arch_intel_sse2 = 1 ceph_arch_neon = 0 PASS: unittest_arch [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from AES [ RUN ] AES.ValidateSecret [ OK ] AES.ValidateSecret (0 ms) [ RUN ] AES.Encrypt [ OK ] AES.Encrypt (7 ms) [ RUN ] AES.Decrypt [ OK ] AES.Decrypt (0 ms) [ RUN ] AES.Loop [ OK ] AES.Loop (113 ms) [----------] 4 tests from AES (120 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (120 ms total) [ PASSED ] 4 tests. PASS: unittest_crypto [==========] Running 0 tests from 0 test cases. [==========] 0 tests from 0 test cases ran. (0 ms total) [ PASSED ] 0 tests. PASS: unittest_crypto_init [==========] Running 3 tests from 1 test case. [----------] Global test environment set-up. [----------] 3 tests from PerfCounters [ RUN ] PerfCounters.SimpleTest [ OK ] PerfCounters.SimpleTest (0 ms) [ RUN ] PerfCounters.SinglePerfCounters [ OK ] PerfCounters.SinglePerfCounters (1 ms) [ RUN ] PerfCounters.MultiplePerfCounters [ OK ] PerfCounters.MultiplePerfCounters (2 ms) [----------] 3 tests from PerfCounters (3 ms total) [----------] Global test environment tear-down [==========] 3 tests from 1 test case ran. (3 ms total) [ PASSED ] 3 tests. 
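[editor's note] Two performance details above are worth decoding. In crc32c_append_perf, the implausible 1.28e+08 MB/sec for repeated calls lines up with the "crc cache hits" counters: a hedged reading is that the bufferlist caches previously computed CRCs and short-circuits the recomputation. In Crc32c.Performance, "best choice" at ~3861 MB/sec versus "intel baseline" at ~352 MB/sec is consistent with the SSE4.2 hardware CRC32 path being selected ("ceph_arch_intel_sse42 = 1" is reported just below). A sketch of how such an MB/sec figure can be derived; the real test uses Ceph's own CRC and timing helpers:

    #include <chrono>
    #include <cstdint>
    #include <vector>

    template <typename CrcFn>  // CrcFn: uint32_t(uint32_t, const char*, size_t)
    double crc_mb_per_sec(CrcFn crc, const std::vector<char>& buf, int iters) {
      using clock = std::chrono::steady_clock;
      uint32_t acc = 0;
      auto t0 = clock::now();
      for (int i = 0; i < iters; ++i)
        acc = crc(acc, buf.data(), buf.size());  // fold result to defeat DCE
      std::chrono::duration<double> dt = clock::now() - t0;
      (void)acc;
      return (double(buf.size()) * iters / (1 << 20)) / dt.count();  // MB/s
    }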
PASS: unittest_perf_counters 2014-10-08 11:13:06.094065 2acde7e24c40 -1 did not load config file, using default settings. [==========] Running 8 tests from 2 test cases. [----------] Global test environment set-up. [----------] 7 tests from AdminSocket [ RUN ] AdminSocket.Teardown [ OK ] AdminSocket.Teardown (0 ms) [ RUN ] AdminSocket.TeardownSetup [ OK ] AdminSocket.TeardownSetup (0 ms) [ RUN ] AdminSocket.SendHelp [ OK ] AdminSocket.SendHelp (2 ms) [ RUN ] AdminSocket.SendNoOp [ OK ] AdminSocket.SendNoOp (1 ms) [ RUN ] AdminSocket.RegisterCommand [ OK ] AdminSocket.RegisterCommand (4 ms) [ RUN ] AdminSocket.RegisterCommandPrefixes [ OK ] AdminSocket.RegisterCommandPrefixes (2 ms) [ RUN ] AdminSocket.bind_and_listen message: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/tmp/perfcounters_test_socket.3536.1412766786': (17) File exists [ OK ] AdminSocket.bind_and_listen (1 ms) [----------] 7 tests from AdminSocket (10 ms total) [----------] 1 test from AdminSocketClient [ RUN ] AdminSocketClient.Ping [ OK ] AdminSocketClient.Ping (4998 ms) [----------] 1 test from AdminSocketClient (4998 ms total) [----------] Global test environment tear-down [==========] 8 tests from 2 test cases ran. (5008 ms total) [ PASSED ] 8 tests. 2014-10-08 11:13:11.102099 2acde8431700 -1 asok(0x47b78a0) AdminSocket: error writing response length (32) Broken pipe PASS: unittest_admin_socket [==========] Running 7 tests from 3 test cases. [----------] Global test environment set-up. [----------] 1 test from ForkDeathTest [ RUN ] ForkDeathTest.MD5 [WARNING] ./src/gtest-death-test.cc:741:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads. [ OK ] ForkDeathTest.MD5 (3 ms) [----------] 1 test from ForkDeathTest (3 ms total) [----------] 3 tests from MD5 [ RUN ] MD5.Simple [ OK ] MD5.Simple (0 ms) [ RUN ] MD5.MultiUpdate [ OK ] MD5.MultiUpdate (0 ms) [ RUN ] MD5.Restart [ OK ] MD5.Restart (0 ms) [----------] 3 tests from MD5 (0 ms total) [----------] 3 tests from HMACSHA1 [ RUN ] HMACSHA1.Simple [ OK ] HMACSHA1.Simple (1 ms) [ RUN ] HMACSHA1.MultiUpdate [ OK ] HMACSHA1.MultiUpdate (0 ms) [ RUN ] HMACSHA1.Restart [ OK ] HMACSHA1.Restart (0 ms) [----------] 3 tests from HMACSHA1 (2 ms total) [----------] Global test environment tear-down [==========] 7 tests from 3 test cases ran. (5 ms total) [ PASSED ] 7 tests. PASS: unittest_ceph_crypto Running main() from gtest_main.cc [==========] Running 5 tests from 2 test cases. [----------] Global test environment set-up. [----------] 4 tests from IsValidUtf8 [ RUN ] IsValidUtf8.SimpleAscii [ OK ] IsValidUtf8.SimpleAscii (1 ms) [ RUN ] IsValidUtf8.ControlChars [ OK ] IsValidUtf8.ControlChars (0 ms) [ RUN ] IsValidUtf8.SimpleUtf8 [ OK ] IsValidUtf8.SimpleUtf8 (0 ms) [ RUN ] IsValidUtf8.InvalidUtf8 [ OK ] IsValidUtf8.InvalidUtf8 (0 ms) [----------] 4 tests from IsValidUtf8 (1 ms total) [----------] 1 test from HasControlChars [ RUN ] HasControlChars.HasControlChars1 [ OK ] HasControlChars.HasControlChars1 (0 ms) [----------] 1 test from HasControlChars (0 ms total) [----------] Global test environment tear-down [==========] 5 tests from 2 test cases ran. (1 ms total) [ PASSED ] 5 tests. PASS: unittest_utf8 Running main() from gtest_main.cc [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. 
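[editor's note] The AdminSocket.bind_and_listen message above ("(17) File exists") is the test provoking EEXIST by binding to a path that is already occupied: unlike a TCP port, a UNIX-domain socket leaves a file on disk, so a second bind to the same path fails until the file is removed. A plain-POSIX sketch of the pattern, not Ceph's AdminSocket class:

    #include <cstdio>
    #include <cstring>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int bind_unix(const char* path) {
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      if (fd < 0) return -1;
      sockaddr_un addr{};
      addr.sun_family = AF_UNIX;
      std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
      if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        std::perror("bind");   // EEXIST here if the socket file remains
        close(fd);             // (callers often unlink(path) and retry)
        return -1;
      }
      if (listen(fd, 5) < 0) { close(fd); return -1; }
      return fd;
    }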
[----------] 6 tests from MimeTests [ RUN ] MimeTests.SimpleEncode [ OK ] MimeTests.SimpleEncode (0 ms) [ RUN ] MimeTests.EncodeOutOfSpace [ OK ] MimeTests.EncodeOutOfSpace (0 ms) [ RUN ] MimeTests.SimpleDecode [ OK ] MimeTests.SimpleDecode (0 ms) [ RUN ] MimeTests.LowercaseDecode [ OK ] MimeTests.LowercaseDecode (0 ms) [ RUN ] MimeTests.DecodeOutOfSpace [ OK ] MimeTests.DecodeOutOfSpace (0 ms) [ RUN ] MimeTests.DecodeErrors [ OK ] MimeTests.DecodeErrors (0 ms) [----------] 6 tests from MimeTests (0 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (0 ms total) [ PASSED ] 6 tests. PASS: unittest_mime Running main() from gtest_main.cc [==========] Running 8 tests from 2 test cases. [----------] Global test environment set-up. [----------] 4 tests from EscapeXml [ RUN ] EscapeXml.PassThrough [ OK ] EscapeXml.PassThrough (0 ms) [ RUN ] EscapeXml.EntityRefs1 [ OK ] EscapeXml.EntityRefs1 (0 ms) [ RUN ] EscapeXml.ControlChars [ OK ] EscapeXml.ControlChars (0 ms) [ RUN ] EscapeXml.Utf8 [ OK ] EscapeXml.Utf8 (0 ms) [----------] 4 tests from EscapeXml (0 ms total) [----------] 4 tests from EscapeJson [ RUN ] EscapeJson.PassThrough [ OK ] EscapeJson.PassThrough (0 ms) [ RUN ] EscapeJson.Escapes1 [ OK ] EscapeJson.Escapes1 (0 ms) [ RUN ] EscapeJson.ControlChars [ OK ] EscapeJson.ControlChars (0 ms) [ RUN ] EscapeJson.Utf8 [ OK ] EscapeJson.Utf8 (0 ms) [----------] 4 tests from EscapeJson (0 ms total) [----------] Global test environment tear-down [==========] 8 tests from 2 test cases ran. (0 ms total) [ PASSED ] 8 tests. PASS: unittest_escape 2014-10-08 11:13:11.149840 2b71ecaffc40 -1 did not load config file, using default settings. [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from chain_xattr [ RUN ] chain_xattr.get_and_set os/chain_xattr.cc: In function 'void get_raw_xattr_name(const char*, int, char*, int)' thread 2b71ecaffc40 time 2014-10-08 11:13:11.153416 os/chain_xattr.cc: 47: FAILED assert(pos < raw_len - 1) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x859435] 2: ./unittest_chain_xattr() [0x82d563] 3: (chain_setxattr(char const*, char const*, void const*, unsigned long)+0x94) [0x82dd44] 4: (chain_xattr_get_and_set_Test::TestBody()+0x24d7) [0x82913b] 5: (testing::Test::Run()+0x95) [0x83337f] 6: (testing::internal::TestInfoImpl::Run()+0xd7) [0x8338e5] 7: (testing::TestCase::Run()+0xca) [0x833df0] 8: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83836c] 9: (testing::UnitTest::Run()+0x1c) [0x8372ce] 10: (main()+0x1d0) [0x82c071] 11: (__libc_start_main()+0xed) [0x2b71ec76076d] 12: ./unittest_chain_xattr() [0x826ba9] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
os/chain_xattr.cc: In function 'void get_raw_xattr_name(const char*, int, char*, int)' thread 2b71ecaffc40 time 2014-10-08 11:13:11.156430 os/chain_xattr.cc: 47: FAILED assert(pos < raw_len - 1) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x859435] 2: ./unittest_chain_xattr() [0x82d563] 3: (chain_fsetxattr(int, char const*, void const*, unsigned long)+0x93) [0x82df06] 4: (chain_xattr_get_and_set_Test::TestBody()+0x25bd) [0x829221] 5: (testing::Test::Run()+0x95) [0x83337f] 6: (testing::internal::TestInfoImpl::Run()+0xd7) [0x8338e5] 7: (testing::TestCase::Run()+0xca) [0x833df0] 8: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x83836c] 9: (testing::UnitTest::Run()+0x1c) [0x8372ce] 10: (main()+0x1d0) [0x82c071] 11: (__libc_start_main()+0xed) [0x2b71ec76076d] 12: ./unittest_chain_xattr() [0x826ba9] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] chain_xattr.get_and_set (9 ms) [ RUN ] chain_xattr.listxattr [ OK ] chain_xattr.listxattr (0 ms) [----------] 2 tests from chain_xattr (9 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (9 ms total) [ PASSED ] 2 tests. PASS: unittest_chain_xattr 2014-10-08 11:13:11.175846 2b10999ccc40 -1 did not load config file, using default settings. [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from FlatIndex [ RUN ] FlatIndex.FlatIndex [ OK ] FlatIndex.FlatIndex (0 ms) [ RUN ] FlatIndex.collection [WARNING] ./src/gtest-death-test.cc:741:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads. [ OK ] FlatIndex.collection (9 ms) [ RUN ] FlatIndex.created_unlink [ OK ] FlatIndex.created_unlink (4 ms) [ RUN ] FlatIndex.collection_list [ OK ] FlatIndex.collection_list (5 ms) [----------] 4 tests from FlatIndex (19 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (19 ms total) [ PASSED ] 4 tests. PASS: unittest_flatindex Running main() from gtest_main.cc [==========] Running 5 tests from 2 test cases. [----------] Global test environment set-up. [----------] 2 tests from StrToL [ RUN ] StrToL.Simple1 [ OK ] StrToL.Simple1 (0 ms) [ RUN ] StrToL.Error1 [ OK ] StrToL.Error1 (0 ms) [----------] 2 tests from StrToL (0 ms total) [----------] 3 tests from SIStrToLL [ RUN ] SIStrToLL.WithUnits [ OK ] SIStrToLL.WithUnits (0 ms) [ RUN ] SIStrToLL.WithoutUnits [ OK ] SIStrToLL.WithoutUnits (0 ms) [ RUN ] SIStrToLL.Error [ OK ] SIStrToLL.Error (0 ms) [----------] 3 tests from SIStrToLL (0 ms total) [----------] Global test environment tear-down [==========] 5 tests from 2 test cases ran. (0 ms total) [ PASSED ] 5 tests. PASS: unittest_strtol Running main() from gtest_main.cc [==========] Running 9 tests from 1 test case. [----------] Global test environment set-up. 
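[editor's note] The chain_xattr dumps above come from get_raw_xattr_name, which (judging from the test and function names; this reading is an assumption) builds the per-chunk key names used to chain a long xattr value across several filesystem xattrs. The assert(pos < raw_len - 1) insists the generated name fit its buffer, and the test trips it deliberately with an over-long name before passing. Hedged sketch with an illustrative naming scheme:

    #include <cassert>
    #include <cstdio>

    // Build the name of the i-th chunk, e.g. "user.foo", "user.foo@1", ...
    // (chunk-name format here is illustrative, not necessarily Ceph's).
    void raw_xattr_name(const char* name, int i, char* out, int out_len) {
      int pos = (i == 0)
          ? snprintf(out, out_len, "%s", name)
          : snprintf(out, out_len, "%s@%d", name, i);
      assert(pos < out_len - 1);  // cf. FAILED assert(pos < raw_len - 1)
    }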
[----------] 9 tests from ConfUtils [ RUN ] ConfUtils.Whitespace [ OK ] ConfUtils.Whitespace (0 ms) [ RUN ] ConfUtils.ParseFiles0 [ OK ] ConfUtils.ParseFiles0 (0 ms) [ RUN ] ConfUtils.ParseFiles1 [ OK ] ConfUtils.ParseFiles1 (1 ms) [ RUN ] ConfUtils.ReadFiles1 [ OK ] ConfUtils.ReadFiles1 (0 ms) [ RUN ] ConfUtils.ReadFiles2 [ OK ] ConfUtils.ReadFiles2 (1 ms) [ RUN ] ConfUtils.IllegalFiles [ OK ] ConfUtils.IllegalFiles (0 ms) [ RUN ] ConfUtils.EscapingFiles [ OK ] ConfUtils.EscapingFiles (1 ms) [ RUN ] ConfUtils.Overrides [ OK ] ConfUtils.Overrides (11 ms) [ RUN ] ConfUtils.DupKey [ OK ] ConfUtils.DupKey (4 ms) [----------] 9 tests from ConfUtils (18 ms total) [----------] Global test environment tear-down [==========] 9 tests from 1 test case ran. (18 ms total) [ PASSED ] 9 tests. PASS: unittest_confutils Running main() from gtest_main.cc [==========] Running 3 tests from 2 test cases. [----------] Global test environment set-up. [----------] 2 tests from test_md_config_t [ RUN ] test_md_config_t.expand_meta [ OK ] test_md_config_t.expand_meta (1 ms) [ RUN ] test_md_config_t.expand_all_meta [ OK ] test_md_config_t.expand_all_meta (0 ms) [----------] 2 tests from test_md_config_t (1 ms total) [----------] 1 test from md_config_t [ RUN ] md_config_t.set_val [ OK ] md_config_t.set_val (0 ms) [----------] 1 test from md_config_t (0 ms total) [----------] Global test environment tear-down [==========] 3 tests from 2 test cases ran. (1 ms total) [ PASSED ] 3 tests. PASS: unittest_config Running main() from gtest_main.cc [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from CephContext [ RUN ] CephContext.do_command [ OK ] CephContext.do_command (1 ms) [----------] 1 test from CephContext (1 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (1 ms total) [ PASSED ] 1 test. PASS: unittest_context [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from HeartbeatMap [ RUN ] HeartbeatMap.Healthy [ OK ] HeartbeatMap.Healthy (0 ms) [ RUN ] HeartbeatMap.Unhealth [ OK ] HeartbeatMap.Unhealth (2000 ms) [----------] 2 tests from HeartbeatMap (2000 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (2000 ms total) [ PASSED ] 2 tests. 2014-10-08 11:13:13.284871 2afb213f0c40 1 heartbeat_map is_healthy 'one' had timed out after 1 PASS: unittest_heartbeatmap [==========] Running 13 tests from 2 test cases. [----------] Global test environment set-up. 
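[editor's note] test_md_config_t.expand_meta above exercises $-variable substitution in configuration values; the exact variable set ($host, $name, etc.) is an assumption here. A tiny illustrative expander, not the real md_config_t logic:

    #include <map>
    #include <string>

    // Simplification: assumes expanded values do not themselves contain
    // '$' tokens (the real code has to guard against recursive expansion).
    std::string expand_meta(std::string v,
                            const std::map<std::string, std::string>& meta) {
      for (const auto& kv : meta) {
        const std::string tok = "$" + kv.first;
        for (size_t p; (p = v.find(tok)) != std::string::npos; )
          v.replace(p, tok.size(), kv.second);
      }
      return v;
    }

    // expand_meta("/var/log/ceph/$name.log", {{"name", "osd.0"}})
    //   -> "/var/log/ceph/osd.0.log"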
[----------] 3 tests from JsonFormatter [ RUN ] JsonFormatter.Simple1 [ OK ] JsonFormatter.Simple1 (0 ms) [ RUN ] JsonFormatter.Simple2 [ OK ] JsonFormatter.Simple2 (0 ms) [ RUN ] JsonFormatter.Empty [ OK ] JsonFormatter.Empty (0 ms) [----------] 3 tests from JsonFormatter (0 ms total) [----------] 10 tests from XmlFormatter [ RUN ] XmlFormatter.Simple1 [ OK ] XmlFormatter.Simple1 (0 ms) [ RUN ] XmlFormatter.Simple2 [ OK ] XmlFormatter.Simple2 (0 ms) [ RUN ] XmlFormatter.Empty [ OK ] XmlFormatter.Empty (1 ms) [ RUN ] XmlFormatter.DumpStream1 [ OK ] XmlFormatter.DumpStream1 (0 ms) [ RUN ] XmlFormatter.DumpStream2 [ OK ] XmlFormatter.DumpStream2 (0 ms) [ RUN ] XmlFormatter.DumpStream3 [ OK ] XmlFormatter.DumpStream3 (0 ms) [ RUN ] XmlFormatter.DTD [ OK ] XmlFormatter.DTD (0 ms) [ RUN ] XmlFormatter.Clear [ OK ] XmlFormatter.Clear (0 ms) [ RUN ] XmlFormatter.NamespaceTest [ OK ] XmlFormatter.NamespaceTest (0 ms) [ RUN ] XmlFormatter.DumpFormatNameSpaceTest [ OK ] XmlFormatter.DumpFormatNameSpaceTest (0 ms) [----------] 10 tests from XmlFormatter (1 ms total) [----------] Global test environment tear-down [==========] 13 tests from 2 test cases ran. (1 ms total) [ PASSED ] 13 tests. PASS: unittest_formatter Running main() from gtest_main.cc [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from LibCephConfig [ RUN ] LibCephConfig.SimpleSet [ OK ] LibCephConfig.SimpleSet (3 ms) [ RUN ] LibCephConfig.ArgV [ OK ] LibCephConfig.ArgV (1 ms) [----------] 2 tests from LibCephConfig (4 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (4 ms total) [ PASSED ] 2 tests. PASS: unittest_libcephfs_config 2014-10-08 11:13:13.873368 2abab26a2c40 -1 did not load config file, using default settings. [==========] Running 5 tests from 4 test cases. [----------] Global test environment set-up. [----------] 1 test from TestHASH_INDEX_TAG [ RUN ] TestHASH_INDEX_TAG.generate_and_parse_name [ OK ] TestHASH_INDEX_TAG.generate_and_parse_name (0 ms) [----------] 1 test from TestHASH_INDEX_TAG (1 ms total) [----------] 1 test from TestHASH_INDEX_TAG_2 [ RUN ] TestHASH_INDEX_TAG_2.generate_and_parse_name [ OK ] TestHASH_INDEX_TAG_2.generate_and_parse_name (0 ms) [----------] 1 test from TestHASH_INDEX_TAG_2 (0 ms total) [----------] 1 test from TestHOBJECT_WITH_POOL [ RUN ] TestHOBJECT_WITH_POOL.generate_and_parse_name [ OK ] TestHOBJECT_WITH_POOL.generate_and_parse_name (0 ms) [----------] 1 test from TestHOBJECT_WITH_POOL (0 ms total) [----------] 2 tests from TestLFNIndex [ RUN ] TestLFNIndex.remove_object [ OK ] TestLFNIndex.remove_object (35 ms) [ RUN ] TestLFNIndex.get_mangled_name [ OK ] TestLFNIndex.get_mangled_name (6 ms) [----------] 2 tests from TestLFNIndex (42 ms total) [----------] Global test environment tear-down [==========] 5 tests from 4 test cases ran. (43 ms total) [ PASSED ] 5 tests. PASS: unittest_lfnindex Running main() from gtest_main.cc [==========] Running 3 tests from 1 test case. [----------] Global test environment set-up. [----------] 3 tests from LibRadosConfig [ RUN ] LibRadosConfig.SimpleSet [ OK ] LibRadosConfig.SimpleSet (2 ms) [ RUN ] LibRadosConfig.ArgV [ OK ] LibRadosConfig.ArgV (2 ms) [ RUN ] LibRadosConfig.DebugLevels [ OK ] LibRadosConfig.DebugLevels (1 ms) [----------] 3 tests from LibRadosConfig (5 ms total) [----------] Global test environment tear-down [==========] 3 tests from 1 test case ran. (5 ms total) [ PASSED ] 3 tests. 
PASS: unittest_librados_config [==========] Running 12 tests from 1 test case. [----------] Global test environment set-up. [----------] 12 tests from DaemonConfig [ RUN ] DaemonConfig.SimpleSet [ OK ] DaemonConfig.SimpleSet (1 ms) [ RUN ] DaemonConfig.Substitution [ OK ] DaemonConfig.Substitution (0 ms) [ RUN ] DaemonConfig.SubstitutionTrailing [ OK ] DaemonConfig.SubstitutionTrailing (0 ms) [ RUN ] DaemonConfig.SubstitutionBraces [ OK ] DaemonConfig.SubstitutionBraces (0 ms) [ RUN ] DaemonConfig.SubstitutionBracesTrailing [ OK ] DaemonConfig.SubstitutionBracesTrailing (0 ms) [ RUN ] DaemonConfig.SubstitutionMultiple [ OK ] DaemonConfig.SubstitutionMultiple (0 ms) [ RUN ] DaemonConfig.ArgV [ OK ] DaemonConfig.ArgV (0 ms) [ RUN ] DaemonConfig.InjectArgs max_open_files = '42' num_client = '56' num_client = '57' [ OK ] DaemonConfig.InjectArgs (1 ms) [ RUN ] DaemonConfig.InjectArgsReject failed to parse arguments: --random-garbage-in-injectargs,26 num_client = '28' You cannot change osd_data using injectargs. num_client = '4' [ OK ] DaemonConfig.InjectArgsReject (2 ms) [ RUN ] DaemonConfig.InjectArgsBooleans log_to_syslog = 'true' num_client = '28' log_to_syslog = 'false' num_client = '28' log_to_syslog = 'true' max_open_files = '40' num_client = '1' Parse error parsing binary flag --log_to_syslog. Expected true or false, but got 'falsey' max_open_files = '42' num_client = '1' [ OK ] DaemonConfig.InjectArgsBooleans (1 ms) [ RUN ] DaemonConfig.InjectArgsLogfile log_file = '/tmp/daemon_config_test.3668' [ OK ] DaemonConfig.InjectArgsLogfile (0 ms) [ RUN ] DaemonConfig.ThreadSafety1 [ OK ] DaemonConfig.ThreadSafety1 (0 ms) [----------] 12 tests from DaemonConfig (5 ms total) [----------] Global test environment tear-down [==========] 12 tests from 1 test case ran. (5 ms total) [ PASSED ] 12 tests. PASS: unittest_daemon_config Running main() from gtest_main.cc [==========] Running 24 tests from 1 test case. [----------] Global test environment set-up. 
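[editor's note] The DaemonConfig.InjectArgsBooleans output above documents the accepted spellings for a binary flag: a bare flag and the literal strings true/false work, while 'falsey' is rejected with "Expected true or false". A minimal parser in that spirit (illustrative; not md_config_t's actual argument handling):

    #include <optional>
    #include <string>

    std::optional<bool> parse_bool_flag(const std::string& s) {
      if (s.empty() || s == "true") return true;   // bare flag means true
      if (s == "false") return false;
      return std::nullopt;  // caller reports: expected true or false, got s
    }

The OSDCap and MDSAuthCaps suites that follow feed the capability-string grammar both well-formed inputs ('allow rwx pool foo, allow r pool bar') and malformed ones, checking that the parser reports exactly where it stopped.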
[----------] 24 tests from OSDCap [ RUN ] OSDCap.ParseGood Testing good input: 'allow *' Testing good input: 'allow r' Testing good input: 'allow rwx' Testing good input: 'allow r pool foo ' Testing good input: 'allow r pool=foo' Testing good input: 'allow wx pool taco' Testing good input: 'allow pool foo r' Testing good input: 'allow pool taco wx' Testing good input: 'allow wx pool taco object_prefix obj' Testing good input: 'allow wx pool taco object_prefix obj_with_underscores_and_no_quotes' Testing good input: 'allow pool taco object_prefix obj wx' Testing good input: 'allow pool taco object_prefix obj_with_underscores_and_no_quotes wx' Testing good input: 'allow rwx pool 'weird name'' Testing good input: 'allow rwx pool "weird name with ''s"' Testing good input: 'allow rwx auid 123' Testing good input: 'allow rwx pool foo, allow r pool bar' Testing good input: 'allow rwx pool foo ; allow r pool bar' Testing good input: 'allow rwx pool foo ;allow r pool bar' Testing good input: 'allow rwx pool foo; allow r pool bar' Testing good input: 'allow auid 123 rwx' Testing good input: 'allow pool foo rwx, allow pool bar r' Testing good input: 'allow pool foo.froo.foo rwx, allow pool bar r' Testing good input: 'allow pool foo rwx ; allow pool bar r' Testing good input: 'allow pool foo rwx ;allow pool bar r' Testing good input: 'allow pool foo rwx; allow pool bar r' Testing good input: 'allow pool data rw, allow pool rbd rwx, allow pool images class rbd foo' Testing good input: 'allow class-read' Testing good input: 'allow class-write' Testing good input: 'allow class-read class-write' Testing good input: 'allow r class-read pool foo' Testing good input: 'allow rw class-read class-write pool foo' Testing good input: 'allow r class-read pool foo' Testing good input: 'allow pool bar rwx; allow pool baz r class-read' Testing good input: 'allow class foo' Testing good input: 'allow class clsname "clsthingidon'tunderstand"' Testing good input: ' allow rwx pool foo; allow r pool bar ' Testing good input: ' allow rwx pool foo; allow r pool bar ' Testing good input: ' allow pool foo rwx; allow pool bar r ' Testing good input: ' allow pool foo rwx; allow pool bar r ' Testing good input: ' allow wx pool taco' Testing good input: ' allow wx pool taco ' Testing good input: 'allow r pool foo object_prefix blah ; allow w auid 5' Testing good input: 'allow class-read object_prefix rbd_children, allow pool libvirt-pool-test rwx' Testing good input: 'allow class-read object_prefix rbd-children, allow pool libvirt_pool_test rwx' Testing good input: 'allow pool foo namespace nfoo rwx, allow pool bar namespace=nbar r' Testing good input: 'allow pool foo namespace=nfoo rwx ; allow pool bar namespace=nbar r' Testing good input: 'allow pool foo namespace nfoo rwx ;allow pool bar namespace nbar r' Testing good input: 'allow pool foo namespace=nfoo rwx; allow pool bar namespace nbar object_prefix rbd r' Testing good input: 'allow pool foo namespace="" rwx; allow pool bar namespace='' object_prefix rbd r' Testing good input: 'allow pool foo namespace "" rwx; allow pool bar namespace '' object_prefix rbd r' [ OK ] OSDCap.ParseGood (10 ms) [ RUN ] OSDCap.ParseBad Testing bad input: 'allow r poolfoo' osdcap parse failed, stopped at 'poolfoo' of 'allow r poolfoo' Testing bad input: 'allow r w' osdcap parse failed, stopped at 'w' of 'allow r w' Testing bad input: 'ALLOW r' osdcap parse failed, stopped at 'ALLOW r' of 'ALLOW r' Testing bad input: 'allow rwx,' osdcap parse failed, stopped at ',' of 'allow rwx,' Testing bad 
input: 'allow rwx x' osdcap parse failed, stopped at 'x' of 'allow rwx x' Testing bad input: 'allow r pool foo r' osdcap parse failed, stopped at 'r' of 'allow r pool foo r' Testing bad input: 'allow wwx pool taco' osdcap parse failed, stopped at 'wx pool taco' of 'allow wwx pool taco' Testing bad input: 'allow wwx pool taco^funny&chars' osdcap parse failed, stopped at 'wx pool taco^funny&chars' of 'allow wwx pool taco^funny&chars' Testing bad input: 'allow rwx pool 'weird name''' osdcap parse failed, stopped at ''' of 'allow rwx pool 'weird name''' Testing bad input: 'allow rwx object_prefix "beforepool" pool weird' osdcap parse failed, stopped at 'pool weird' of 'allow rwx object_prefix "beforepool" pool weird' Testing bad input: 'allow rwx auid 123 pool asdf' osdcap parse failed, stopped at 'pool asdf' of 'allow rwx auid 123 pool asdf' Testing bad input: 'allow xrwx pool foo,, allow r pool bar' osdcap parse failed, stopped at 'rwx pool foo,, allow r pool bar' of 'allow xrwx pool foo,, allow r pool bar' Testing bad input: ';allow rwx pool foo rwx ; allow r pool bar' osdcap parse failed, stopped at ';allow rwx pool foo rwx ; allow r pool bar' of ';allow rwx pool foo rwx ; allow r pool bar' Testing bad input: 'allow rwx pool foo ;allow r pool bar gibberish' osdcap parse failed, stopped at 'gibberish' of 'allow rwx pool foo ;allow r pool bar gibberish' Testing bad input: 'allow rwx auid 123 pool asdf namespace=foo' osdcap parse failed, stopped at 'pool asdf namespace=foo' of 'allow rwx auid 123 pool asdf namespace=foo' Testing bad input: 'allow rwx auid 123 namespace' osdcap parse failed, stopped at 'namespace' of 'allow rwx auid 123 namespace' Testing bad input: 'allow rwx namespace' osdcap parse failed, stopped at 'namespace' of 'allow rwx namespace' Testing bad input: 'allow namespace' osdcap parse failed, stopped at 'allow namespace' of 'allow namespace' Testing bad input: 'allow namespace=foo' osdcap parse failed, stopped at 'allow namespace=foo' of 'allow namespace=foo' Testing bad input: 'allow rwx auid 123 namespace asdf' osdcap parse failed, stopped at 'namespace asdf' of 'allow rwx auid 123 namespace asdf' Testing bad input: 'allow wwx pool ''' osdcap parse failed, stopped at 'wx pool ''' of 'allow wwx pool ''' [ OK ] OSDCap.ParseBad (3 ms) [ RUN ] OSDCap.AllowAll [ OK ] OSDCap.AllowAll (1 ms) [ RUN ] OSDCap.AllowPool [ OK ] OSDCap.AllowPool (0 ms) [ RUN ] OSDCap.AllowPools [ OK ] OSDCap.AllowPools (1 ms) [ RUN ] OSDCap.AllowPools2 [ OK ] OSDCap.AllowPools2 (0 ms) [ RUN ] OSDCap.ObjectPrefix [ OK ] OSDCap.ObjectPrefix (0 ms) [ RUN ] OSDCap.ObjectPoolAndPrefix [ OK ] OSDCap.ObjectPoolAndPrefix (0 ms) [ RUN ] OSDCap.BasicR [ OK ] OSDCap.BasicR (0 ms) [ RUN ] OSDCap.BasicW [ OK ] OSDCap.BasicW (1 ms) [ RUN ] OSDCap.BasicX [ OK ] OSDCap.BasicX (0 ms) [ RUN ] OSDCap.BasicRW [ OK ] OSDCap.BasicRW (0 ms) [ RUN ] OSDCap.BasicRX [ OK ] OSDCap.BasicRX (0 ms) [ RUN ] OSDCap.BasicWX [ OK ] OSDCap.BasicWX (0 ms) [ RUN ] OSDCap.BasicRWX [ OK ] OSDCap.BasicRWX (0 ms) [ RUN ] OSDCap.BasicRWClassRClassW [ OK ] OSDCap.BasicRWClassRClassW (1 ms) [ RUN ] OSDCap.ClassR [ OK ] OSDCap.ClassR (0 ms) [ RUN ] OSDCap.ClassW [ OK ] OSDCap.ClassW (0 ms) [ RUN ] OSDCap.ClassRW [ OK ] OSDCap.ClassRW (0 ms) [ RUN ] OSDCap.BasicRClassR [ OK ] OSDCap.BasicRClassR (0 ms) [ RUN ] OSDCap.PoolClassR [ OK ] OSDCap.PoolClassR (1 ms) [ RUN ] OSDCap.PoolClassRNS [ OK ] OSDCap.PoolClassRNS (0 ms) [ RUN ] OSDCap.NSClassR [ OK ] OSDCap.NSClassR (0 ms) [ RUN ] OSDCap.OutputParsed Testing input 'allow *' Testing input 'allow 
r' Testing input 'allow rx' Testing input 'allow rwx' Testing input 'allow rw class-read class-write' Testing input 'allow rw class-read' Testing input 'allow rw class-write' Testing input 'allow rwx pool images' Testing input 'allow r pool images' Testing input 'allow pool images rwx' Testing input 'allow pool images r' Testing input 'allow pool images w' Testing input 'allow pool images x' Testing input 'allow r pool images namespace ''' Testing input 'allow r pool images namespace foo' Testing input 'allow r pool images namespace ""' Testing input 'allow r namespace foo' Testing input 'allow pool images r; allow pool rbd rwx' Testing input 'allow pool images r, allow pool rbd rwx' Testing input 'allow class-read object_prefix rbd_children, allow pool libvirt-pool-test rwx' [ OK ] OSDCap.OutputParsed (4 ms) [----------] 24 tests from OSDCap (22 ms total) [----------] Global test environment tear-down [==========] 24 tests from 1 test case ran. (22 ms total) [ PASSED ] 24 tests. PASS: unittest_osd_osdcap Running main() from gtest_main.cc [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. [----------] 6 tests from MDSAuthCaps [ RUN ] MDSAuthCaps.ParseGood Testing good input: 'allow * path="/foo"' Testing good input: 'allow * path=/foo' Testing good input: 'allow * path="/foo bar/baz"' Testing good input: 'allow * uid=1' Testing good input: 'allow * path="/foo" uid=1' Testing good input: 'allow *' Testing good input: 'allow r' Testing good input: 'allow rw' [ OK ] MDSAuthCaps.ParseGood (1 ms) [ RUN ] MDSAuthCaps.ParseBad Testing bad input: 'allow r poolfoo' osdcap parse failed, stopped at 'poolfoo' of 'allow r poolfoo' Testing bad input: 'allow r w' osdcap parse failed, stopped at 'w' of 'allow r w' Testing bad input: 'ALLOW r' osdcap parse failed, stopped at 'ALLOW r' of 'ALLOW r' Testing bad input: 'allow w' osdcap parse failed, stopped at 'allow w' of 'allow w' Testing bad input: 'allow rwx,' osdcap parse failed, stopped at 'x,' of 'allow rwx,' Testing bad input: 'allow rwx x' osdcap parse failed, stopped at 'x x' of 'allow rwx x' Testing bad input: 'allow r pool foo r' osdcap parse failed, stopped at 'pool foo r' of 'allow r pool foo r' Testing bad input: 'allow wwx pool taco' osdcap parse failed, stopped at 'allow wwx pool taco' of 'allow wwx pool taco' Testing bad input: 'allow wwx pool taco^funny&chars' osdcap parse failed, stopped at 'allow wwx pool taco^funny&chars' of 'allow wwx pool taco^funny&chars' Testing bad input: 'allow rwx pool 'weird name''' osdcap parse failed, stopped at 'x pool 'weird name''' of 'allow rwx pool 'weird name''' Testing bad input: 'allow rwx object_prefix "beforepool" pool weird' osdcap parse failed, stopped at 'x object_prefix "beforepool" pool weird' of 'allow rwx object_prefix "beforepool" pool weird' Testing bad input: 'allow rwx auid 123 pool asdf' osdcap parse failed, stopped at 'x auid 123 pool asdf' of 'allow rwx auid 123 pool asdf' Testing bad input: 'allow xrwx pool foo,, allow r pool bar' osdcap parse failed, stopped at 'allow xrwx pool foo,, allow r pool bar' of 'allow xrwx pool foo,, allow r pool bar' Testing bad input: ';allow rwx pool foo rwx ; allow r pool bar' osdcap parse failed, stopped at ';allow rwx pool foo rwx ; allow r pool bar' of ';allow rwx pool foo rwx ; allow r pool bar' Testing bad input: 'allow rwx pool foo ;allow r pool bar gibberish' osdcap parse failed, stopped at 'x pool foo ;allow r pool bar gibberish' of 'allow rwx pool foo ;allow r pool bar gibberish' Testing bad input: 'allow 
rwx auid 123 pool asdf namespace=foo' osdcap parse failed, stopped at 'x auid 123 pool asdf namespace=foo' of 'allow rwx auid 123 pool asdf namespace=foo' Testing bad input: 'allow rwx auid 123 namespace' osdcap parse failed, stopped at 'x auid 123 namespace' of 'allow rwx auid 123 namespace' Testing bad input: 'allow rwx namespace' osdcap parse failed, stopped at 'x namespace' of 'allow rwx namespace' Testing bad input: 'allow namespace' osdcap parse failed, stopped at 'allow namespace' of 'allow namespace' Testing bad input: 'allow namespace=foo' osdcap parse failed, stopped at 'allow namespace=foo' of 'allow namespace=foo' Testing bad input: 'allow rwx auid 123 namespace asdf' osdcap parse failed, stopped at 'x auid 123 namespace asdf' of 'allow rwx auid 123 namespace asdf' Testing bad input: 'allow wwx pool ''' osdcap parse failed, stopped at 'allow wwx pool ''' of 'allow wwx pool ''' [ OK ] MDSAuthCaps.ParseBad (2 ms) [ RUN ] MDSAuthCaps.AllowAll [ OK ] MDSAuthCaps.AllowAll (0 ms) [ RUN ] MDSAuthCaps.AllowUid [ OK ] MDSAuthCaps.AllowUid (0 ms) [ RUN ] MDSAuthCaps.AllowPath [ OK ] MDSAuthCaps.AllowPath (0 ms) [ RUN ] MDSAuthCaps.OutputParsed Testing input 'allow' Testing input 'allow *' Testing input 'allow r' Testing input 'allow rw' Testing input 'allow * uid=1' Testing input 'allow * path=/foo' Testing input 'allow * path="/foo"' Testing input 'allow * path="/foo" uid=1' [ OK ] MDSAuthCaps.OutputParsed (1 ms) [----------] 6 tests from MDSAuthCaps (4 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (4 ms total) [ PASSED ] 6 tests. PASS: unittest_mds_authcap Running main() from gtest_main.cc [==========] Running 5 tests from 1 test case. [----------] Global test environment set-up. [----------] 5 tests from MonCap [ RUN ] MonCap.ParseGood Testing good input: 'allow *' -> allow * Testing good input: 'allow r' -> allow r Testing good input: 'allow rwx' -> allow rwx Testing good input: 'allow r' -> allow r Testing good input: ' allow rwx' -> allow rwx Testing good input: 'allow rwx ' -> allow rwx Testing good input: ' allow rwx ' -> allow rwx Testing good input: ' allow rwx ' -> allow rwx Testing good input: ' allow rwx ' -> allow rwx Testing good input: 'allow service=foo x' -> allow service foo x Testing good input: 'allow service="froo" x' -> allow service froo x Testing good input: 'allow profile osd' -> allow profile osd Testing good input: 'allow profile osd-bootstrap' -> allow profile osd-bootstrap Testing good input: 'allow profile "mds-bootstrap", allow *' -> allow profile mds-bootstrap, allow * Testing good input: 'allow command "a b c"' -> allow command "a b c" Testing good input: 'allow command abc' -> allow command abc Testing good input: 'allow command abc with arg=foo' -> allow command abc with arg=foo Testing good input: 'allow command abc with arg=foo arg2=bar' -> allow command abc with arg=foo arg2=bar Testing good input: 'allow command abc with arg=foo arg2=bar' -> allow command abc with arg=foo arg2=bar Testing good input: 'allow command abc with arg=foo arg2 prefix bar arg3 prefix baz' -> allow command abc with arg=foo arg2 prefix bar arg3 prefix baz Testing good input: 'allow command abc with arg=foo arg2 prefix "bar bingo" arg3 prefix baz' -> allow command abc with arg=foo arg2 prefix "bar bingo" arg3 prefix baz Testing good input: 'allow service foo x' -> allow service foo x Testing good input: 'allow service foo x; allow service bar x' -> allow service foo x, allow service bar x Testing good input: 'allow service 
foo w ;allow service bar x' -> allow service foo w, allow service bar x Testing good input: 'allow service foo w , allow service bar x' -> allow service foo w, allow service bar x Testing good input: 'allow service foo r , allow service bar x' -> allow service foo r, allow service bar x Testing good input: 'allow service foo_foo r, allow service bar r' -> allow service foo_foo r, allow service bar r Testing good input: 'allow service foo-foo r, allow service bar r' -> allow service foo-foo r, allow service bar r Testing good input: 'allow service " foo " w, allow service bar r' -> allow service " foo " w, allow service bar r Testing good input: 'allow command abc with arg=foo arg2=bar, allow service foo r' -> allow command abc with arg=foo arg2=bar, allow service foo r Testing good input: 'allow command abc.def with arg=foo arg2=bar, allow service foo r' -> allow command "abc.def" with arg=foo arg2=bar, allow service foo r Testing good input: 'allow command "foo bar" with arg="baz"' -> allow command "foo bar" with arg=baz Testing good input: 'allow command "foo bar" with arg="baz.xx"' -> allow command "foo bar" with arg="baz.xx" [ OK ] MonCap.ParseGood (6 ms) [ RUN ] MonCap.ParseIdentity Testing good input: 'allow *' Testing good input: 'allow r' Testing good input: 'allow rwx' Testing good input: 'allow service foo x' Testing good input: 'allow profile osd' Testing good input: 'allow profile osd-bootstrap' Testing good input: 'allow profile mds-bootstrap, allow *' Testing good input: 'allow profile "foo bar", allow *' Testing good input: 'allow command abc' Testing good input: 'allow command "a b c"' Testing good input: 'allow command abc with arg=foo' Testing good input: 'allow command abc with arg=foo arg2=bar' Testing good input: 'allow command abc with arg=foo arg2=bar' Testing good input: 'allow command abc with arg=foo arg2 prefix bar arg3 prefix baz' Testing good input: 'allow command abc with arg=foo arg2 prefix "bar bingo" arg3 prefix baz' Testing good input: 'allow service foo x' Testing good input: 'allow service foo x, allow service bar x' Testing good input: 'allow service foo w, allow service bar x' Testing good input: 'allow service foo r, allow service bar x' Testing good input: 'allow service foo_foo r, allow service bar r' Testing good input: 'allow service foo-foo r, allow service bar r' Testing good input: 'allow service " foo " w, allow service bar r' Testing good input: 'allow command abc with arg=foo arg2=bar, allow service foo r' [ OK ] MonCap.ParseIdentity (9 ms) [ RUN ] MonCap.ParseBad Testing bad input: 'allow r foo' moncap parse failed, stopped at 'foo' of 'allow r foo' Testing bad input: 'allow*' moncap parse failed, stopped at 'allow*' of 'allow*' Testing bad input: 'foo allow *' moncap parse failed, stopped at 'foo allow *' of 'foo allow *' Testing bad input: 'allow profile foo rwx' moncap parse failed, stopped at 'rwx' of 'allow profile foo rwx' Testing bad input: 'allow profile' moncap parse failed, stopped at 'allow profile' of 'allow profile' Testing bad input: 'allow profile foo bar rwx' moncap parse failed, stopped at 'bar rwx' of 'allow profile foo bar rwx' Testing bad input: 'allow service bar' moncap parse failed, stopped at 'allow service bar' of 'allow service bar' Testing bad input: 'allow command baz x' moncap parse failed, stopped at 'x' of 'allow command baz x' Testing bad input: 'allow r w' moncap parse failed, stopped at 'w' of 'allow r w' Testing bad input: 'ALLOW r' moncap parse failed, stopped at 'ALLOW r' of 'ALLOW r' Testing bad input: 
'allow rwx,' moncap parse failed, stopped at ',' of 'allow rwx,' Testing bad input: 'allow rwx x' moncap parse failed, stopped at 'x' of 'allow rwx x' Testing bad input: 'allow r pool foo r' moncap parse failed, stopped at 'pool foo r' of 'allow r pool foo r' Testing bad input: 'allow wwx pool taco' moncap parse failed, stopped at 'wx pool taco' of 'allow wwx pool taco' Testing bad input: 'allow wwx pool taco^funny&chars' moncap parse failed, stopped at 'wx pool taco^funny&chars' of 'allow wwx pool taco^funny&chars' Testing bad input: 'allow rwx pool 'weird name''' moncap parse failed, stopped at 'pool 'weird name''' of 'allow rwx pool 'weird name''' Testing bad input: 'allow rwx object_prefix "beforepool" pool weird' moncap parse failed, stopped at 'object_prefix "beforepool" pool weird' of 'allow rwx object_prefix "beforepool" pool weird' Testing bad input: 'allow rwx auid 123 pool asdf' moncap parse failed, stopped at 'auid 123 pool asdf' of 'allow rwx auid 123 pool asdf' Testing bad input: 'allow command foo a prefix b' moncap parse failed, stopped at 'a prefix b' of 'allow command foo a prefix b' Testing bad input: 'allow command foo with a prefixb' moncap parse failed, stopped at 'with a prefixb' of 'allow command foo with a prefixb' Testing bad input: 'allow command foo with a = prefix b' moncap parse failed, stopped at 'with a = prefix b' of 'allow command foo with a = prefix b' Testing bad input: 'allow command foo with a prefix b c' moncap parse failed, stopped at 'c' of 'allow command foo with a prefix b c' [ OK ] MonCap.ParseBad (3 ms) [ RUN ] MonCap.AllowAll [ OK ] MonCap.AllowAll (1 ms) [ RUN ] MonCap.ProfileOSD [ OK ] MonCap.ProfileOSD (0 ms) [----------] 5 tests from MonCap (20 ms total) [----------] Global test environment tear-down [==========] 5 tests from 1 test case ran. (20 ms total) [ PASSED ] 5 tests. PASS: unittest_mon_moncap 2014-10-08 11:13:14.516728 2b2e27330a40 -1 did not load config file, using default settings. [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from pgmap [ RUN ] pgmap.min_last_epoch_clean [ OK ] pgmap.min_last_epoch_clean (0 ms) [----------] 1 test from pgmap (0 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (0 ms total) [ PASSED ] 1 test. PASS: unittest_mon_pgmap Running main() from gtest_main.cc [==========] Running 30 tests from 1 test case. [----------] Global test environment set-up. 
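[Editor's note: the three cap parsers exercised above (OSDCap, MDSAuthCaps, MonCap) all consume the capability strings stored with an auth key; the `auth add osd.N ... osd allow * mon allow profile osd` commands later in this log attach exactly such strings. A minimal sketch of granting caps by hand, assuming a hypothetical client.example entity and illustrative pool names, and using only forms the ParseGood cases above accept:

    $ ceph auth get-or-create client.example \
    >     mon 'allow r, allow command "osd dump"' \
    >     osd 'allow rwx pool=foo, allow r pool=bar namespace=nbar'

MDS caps follow the same pattern with path/uid restrictions, e.g. mds 'allow * path=/foo' per the MDSAuthCaps.ParseGood list above.]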
[----------] 30 tests from CommonIPAddr [ RUN ] CommonIPAddr.TestNotFound [ OK ] CommonIPAddr.TestNotFound (0 ms) [ RUN ] CommonIPAddr.TestV4_Simple [ OK ] CommonIPAddr.TestV4_Simple (0 ms) [ RUN ] CommonIPAddr.TestV4_Prefix25 [ OK ] CommonIPAddr.TestV4_Prefix25 (0 ms) [ RUN ] CommonIPAddr.TestV4_Prefix16 [ OK ] CommonIPAddr.TestV4_Prefix16 (0 ms) [ RUN ] CommonIPAddr.TestV4_PrefixTooLong [ OK ] CommonIPAddr.TestV4_PrefixTooLong (0 ms) [ RUN ] CommonIPAddr.TestV4_PrefixZero [ OK ] CommonIPAddr.TestV4_PrefixZero (0 ms) [ RUN ] CommonIPAddr.TestV6_Simple [ OK ] CommonIPAddr.TestV6_Simple (0 ms) [ RUN ] CommonIPAddr.TestV6_Prefix57 [ OK ] CommonIPAddr.TestV6_Prefix57 (0 ms) [ RUN ] CommonIPAddr.TestV6_PrefixTooLong [ OK ] CommonIPAddr.TestV6_PrefixTooLong (0 ms) [ RUN ] CommonIPAddr.TestV6_PrefixZero [ OK ] CommonIPAddr.TestV6_PrefixZero (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Empty [ OK ] CommonIPAddr.ParseNetwork_Empty (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_Junk [ OK ] CommonIPAddr.ParseNetwork_Bad_Junk (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_SlashNum [ OK ] CommonIPAddr.ParseNetwork_Bad_SlashNum (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_Slash [ OK ] CommonIPAddr.ParseNetwork_Bad_Slash (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv4 [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv4 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv4Slash [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv4Slash (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv4SlashNegative [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv4SlashNegative (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv4SlashJunk [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv4SlashJunk (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv6 [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv6 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv6Slash [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv6Slash (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv6SlashNegative [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv6SlashNegative (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_Bad_IPv6SlashJunk [ OK ] CommonIPAddr.ParseNetwork_Bad_IPv6SlashJunk (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv4_0 [ OK ] CommonIPAddr.ParseNetwork_IPv4_0 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv4_13 [ OK ] CommonIPAddr.ParseNetwork_IPv4_13 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv4_32 [ OK ] CommonIPAddr.ParseNetwork_IPv4_32 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv4_42 [ OK ] CommonIPAddr.ParseNetwork_IPv4_42 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv6_0 [ OK ] CommonIPAddr.ParseNetwork_IPv6_0 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv6_67 [ OK ] CommonIPAddr.ParseNetwork_IPv6_67 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv6_128 [ OK ] CommonIPAddr.ParseNetwork_IPv6_128 (0 ms) [ RUN ] CommonIPAddr.ParseNetwork_IPv6_9000 [ OK ] CommonIPAddr.ParseNetwork_IPv6_9000 (0 ms) [----------] 30 tests from CommonIPAddr (0 ms total) [----------] Global test environment tear-down [==========] 30 tests from 1 test case ran. (0 ms total) [ PASSED ] 30 tests. PASS: unittest_ipaddr Running main() from gtest_main.cc [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. 
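[Editor's note: the CommonIPAddr ParseNetwork_* cases above cover the CIDR parser behind network-based address picking: empty input, junk, malformed slashes, and IPv4/IPv6 networks with assorted prefix lengths. A sketch of the ceph.conf stanza that feeds this parser, with illustrative subnets only:

    [global]
        # illustrative values; these are the options the parser above serves
        public network = 192.168.0.0/16
        cluster network = fd00:1234::/64
]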
[----------] 4 tests from TextTable [ RUN ] TextTable.Alignment [ OK ] TextTable.Alignment (0 ms) [ RUN ] TextTable.WidenAndClearShrink [ OK ] TextTable.WidenAndClearShrink (0 ms) [ RUN ] TextTable.Indent [ OK ] TextTable.Indent (0 ms) [ RUN ] TextTable.TooManyItems ./common/TextTable.h: In function 'TextTable& TextTable::operator<<(const T&) [with T = char [2], TextTable = TextTable]' thread 2b81d5d646c0 time 2014-10-08 11:13:14.532293 ./common/TextTable.h: 107: FAILED assert(curcol + 1 <= col.size()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x44229d] 2: (TextTable& TextTable::operator<< (char const (&) [2])+0x173) [0x43e521] 3: (TextTable_TooManyItems_Test::TestBody()+0x17b) [0x43daa1] 4: (testing::Test::Run()+0x95) [0x44e307] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x44e86d] 6: (testing::TestCase::Run()+0xca) [0x44ed78] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x4532f4] 8: (testing::UnitTest::Run()+0x1c) [0x452256] 9: (main()+0x3e) [0x46baf2] 10: (__libc_start_main()+0xed) [0x2b81d59c576d] 11: ./unittest_texttable() [0x43cdd9] NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this. [ OK ] TextTable.TooManyItems (0 ms) [----------] 4 tests from TextTable (0 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (1 ms total) [ PASSED ] 4 tests. PASS: unittest_texttable PASS: unittest_on_exit Running main() from gtest_main.cc [==========] Running 6 tests from 1 test case. [----------] Global test environment set-up. [----------] 6 tests from RBDReplay [ RUN ] RBDReplay.Ser [ OK ] RBDReplay.Ser (0 ms) [ RUN ] RBDReplay.Deser [ OK ] RBDReplay.Deser (0 ms) [ RUN ] RBDReplay.ImageNameMap [ OK ] RBDReplay.ImageNameMap (0 ms) [ RUN ] RBDReplay.rbd_loc_str [ OK ] RBDReplay.rbd_loc_str (0 ms) [ RUN ] RBDReplay.rbd_loc_parse [ OK ] RBDReplay.rbd_loc_parse (0 ms) [ RUN ] RBDReplay.batch_unreachable_from [ OK ] RBDReplay.batch_unreachable_from (0 ms) [----------] 6 tests from RBDReplay (1 ms total) [----------] Global test environment tear-down [==========] 6 tests from 1 test case ran. (1 ms total) [ PASSED ] 6 tests.
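[Editor's note: one step back, the FAILED assert printed inside TextTable.TooManyItems above is expected output, not a failure. The test streams more cells than the table has columns; the ceph assert raises a catchable FailedAssertion, so gtest still reports [ OK ]. When a backtrace like that one does need decoding, the NOTE's own suggestion can be followed roughly as below; unittest_texttable and the frame-2 address 0x43e521 come from this run and only resolve against this exact binary (0.86-267-ge27cf41):

    $ objdump -rdS unittest_texttable > unittest_texttable.asm   # disassembly interleaved with source
    $ addr2line -C -f -e unittest_texttable 0x43e521             # map a frame address to function and file:line
]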
PASS: unittest_rbd_replay main: 105: setup test-erasure-code setup: 18: local dir=test-erasure-code setup: 19: teardown test-erasure-code teardown: 24: local dir=test-erasure-code teardown: 25: kill_daemons test-erasure-code kill_daemons: 60: local dir=test-erasure-code kkill_daemons: 59: find test-erasure-code kkill_daemons: 59: grep pidfile find: `test-erasure-code': No such file or directory teardown: 26: rm -fr test-erasure-code setup: 20: mkdir test-erasure-code main: 106: local code main: 107: run test-erasure-code run: 22: local dir=test-erasure-code run: 24: export CEPH_ARGS rrun: 25: uuidgen run: 25: CEPH_ARGS+='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none ' run: 26: CEPH_ARGS+='--mon-host=127.0.0.1 ' run: 28: setup test-erasure-code setup: 18: local dir=test-erasure-code setup: 19: teardown test-erasure-code teardown: 24: local dir=test-erasure-code teardown: 25: kill_daemons test-erasure-code kill_daemons: 60: local dir=test-erasure-code kkill_daemons: 59: find test-erasure-code kkill_daemons: 59: grep pidfile teardown: 26: rm -fr test-erasure-code setup: 20: mkdir test-erasure-code run: 29: run_mon test-erasure-code a --public-addr 127.0.0.1 run_mon: 30: local dir=test-erasure-code run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=test-erasure-code/a --run-dir=test-erasure-code/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c4683507-2239-4b95-9a38-2213923eb799 ./ceph-mon: created monfs at test-erasure-code/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=test-erasure-code/a --log-file=test-erasure-code/a/log --mon-cluster-log-file=test-erasure-code/a/log --run-dir=test-erasure-code/a --pid-file=test-erasure-code/a/pidfile --public-addr 127.0.0.1 run: 31: CEPH_ARGS= run: 31: ./ceph --admin-daemon test-erasure-code/a/ceph-mon.a.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} run: 32: grep 'load: jerasure.*lrc' test-erasure-code/a/log 2014-10-08 11:13:15.510116 2b96fc9a5f40 10 load: jerasure load: lrc rrun: 21: seq 0 10 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 0 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=0 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/0 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/0 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/0 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/0 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/0' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/0 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/0 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 1312a54a-01a2-4245-8154-658afd27180e DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 1312a54a-01a2-4245-8154-658afd27180e *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/0/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-erasure-code/0/activate.monmap --osd-data test-erasure-code/0 --osd-journal test-erasure-code/0/journal --osd-uuid 1312a54a-01a2-4245-8154-658afd27180e --keyring test-erasure-code/0/keyring 2014-10-08 11:13:16.861732 2b7ce7fffbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:17.054917 2b7ce7fffbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:17.062642 2b7ce7fffbc0 -1 filestore(test-erasure-code/0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:17.104285 2b7ce7fffbc0 -1 created object store test-erasure-code/0 journal test-erasure-code/0/journal for osd.0 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:17.104362 2b7ce7fffbc0 -1 auth: error reading file: test-erasure-code/0/keyring: can't open test-erasure-code/0/keyring: (2) No such file or directory 2014-10-08 11:13:17.104502 2b7ce7fffbc0 -1 created new key in keyring test-erasure-code/0/keyring DEBUG:ceph-disk:Authorizing OSD key... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.0 -i test-erasure-code/0/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-erasure-code/0 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-erasure-code/0 --osd-journal=test-erasure-code/0/journal starting osd.0 at :/0 osd_data test-erasure-code/0 test-erasure-code/0/journal rrun_osd: 54: cat test-erasure-code/0/whoami run_osd: 54: '[' 0 = 0 ']' run_osd: 56: ./ceph osd crush create-or-move 0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: grep 'osd.0 up' run_osd: 61: ceph osd dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up in weight 1 up_from 3 up_thru 0 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/3929 127.0.0.1:6801/3929 127.0.0.1:6802/3929 127.0.0.1:6803/3929 exists,up 1312a54a-01a2-4245-8154-658afd27180e run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 1 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=1 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/1 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/1 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/1 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/1 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/1' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/1 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/1 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/1 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 1521f0fa-bd64-4114-8e56-308bea5d039c DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 1521f0fa-bd64-4114-8e56-308bea5d039c *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 1 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/1/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap test-erasure-code/1/activate.monmap --osd-data test-erasure-code/1 --osd-journal test-erasure-code/1/journal --osd-uuid 1521f0fa-bd64-4114-8e56-308bea5d039c --keyring test-erasure-code/1/keyring 2014-10-08 11:13:19.741594 2ab4ef3c8bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:19.857024 2ab4ef3c8bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:19.857695 2ab4ef3c8bc0 -1 filestore(test-erasure-code/1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:19.888490 2ab4ef3c8bc0 -1 created object store test-erasure-code/1 journal test-erasure-code/1/journal for osd.1 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:19.888570 2ab4ef3c8bc0 -1 auth: error reading file: test-erasure-code/1/keyring: can't open test-erasure-code/1/keyring: (2) No such file or directory 2014-10-08 11:13:19.888716 2ab4ef3c8bc0 -1 created new key in keyring test-erasure-code/1/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.1 -i test-erasure-code/1/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.1 DEBUG:ceph-disk:ceph osd.1 data dir is ready at test-erasure-code/1 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=1 --osd-data=test-erasure-code/1 --osd-journal=test-erasure-code/1/journal starting osd.1 at :/0 osd_data test-erasure-code/1 test-erasure-code/1/journal rrun_osd: 54: cat test-erasure-code/1/whoami run_osd: 54: '[' 1 = 1 ']' run_osd: 56: ./ceph osd crush create-or-move 1 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.1' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.1 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.1 up in weight 1 up_from 7 up_thru 8 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/4265 127.0.0.1:6805/4265 127.0.0.1:6806/4265 127.0.0.1:6807/4265 exists,up 1521f0fa-bd64-4114-8e56-308bea5d039c run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 2 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=2 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/2 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/2 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/2 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/2 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/2' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/2 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/2 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/2 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 2bd4d712-ea10-4193-94de-5308fda168d9 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 2bd4d712-ea10-4193-94de-5308fda168d9 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 2 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/2/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 2 --monmap test-erasure-code/2/activate.monmap --osd-data test-erasure-code/2 --osd-journal test-erasure-code/2/journal --osd-uuid 2bd4d712-ea10-4193-94de-5308fda168d9 --keyring test-erasure-code/2/keyring 2014-10-08 11:13:22.646367 2ae4f69f3bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:22.694649 2ae4f69f3bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:22.695224 2ae4f69f3bc0 -1 filestore(test-erasure-code/2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:22.723267 2ae4f69f3bc0 -1 created object store test-erasure-code/2 journal test-erasure-code/2/journal for osd.2 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:22.723357 2ae4f69f3bc0 -1 auth: error reading file: test-erasure-code/2/keyring: can't open test-erasure-code/2/keyring: (2) No such file or directory 2014-10-08 11:13:22.723511 2ae4f69f3bc0 -1 created new key in keyring test-erasure-code/2/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.2 -i test-erasure-code/2/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.2 DEBUG:ceph-disk:ceph osd.2 data dir is ready at test-erasure-code/2 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=2 --osd-data=test-erasure-code/2 --osd-journal=test-erasure-code/2/journal starting osd.2 at :/0 osd_data test-erasure-code/2 test-erasure-code/2/journal rrun_osd: 54: cat test-erasure-code/2/whoami run_osd: 54: '[' 2 = 2 ']' run_osd: 56: ./ceph osd crush create-or-move 2 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.2' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.2 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.2 up in weight 1 up_from 11 up_thru 12 down_at 0 last_clean_interval [0,0) 127.0.0.1:6808/4621 127.0.0.1:6809/4621 127.0.0.1:6810/4621 127.0.0.1:6811/4621 exists,up 2bd4d712-ea10-4193-94de-5308fda168d9 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 3 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=3 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/3 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/3 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/3 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/3 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/3' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/3 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/3 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/3 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 68b70aa2-fff0-4876-b6a0-9d6699c81ab4 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 68b70aa2-fff0-4876-b6a0-9d6699c81ab4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 3 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/3/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 3 --monmap test-erasure-code/3/activate.monmap --osd-data test-erasure-code/3 --osd-journal test-erasure-code/3/journal --osd-uuid 68b70aa2-fff0-4876-b6a0-9d6699c81ab4 --keyring test-erasure-code/3/keyring 2014-10-08 11:13:25.286721 2b3280b7fbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:25.353650 2b3280b7fbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:25.354357 2b3280b7fbc0 -1 filestore(test-erasure-code/3) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:25.409843 2b3280b7fbc0 -1 created object store test-erasure-code/3 journal test-erasure-code/3/journal for osd.3 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:25.409932 2b3280b7fbc0 -1 auth: error reading file: test-erasure-code/3/keyring: can't open test-erasure-code/3/keyring: (2) No such file or directory 2014-10-08 11:13:25.410125 2b3280b7fbc0 -1 created new key in keyring test-erasure-code/3/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.3 -i test-erasure-code/3/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.3 DEBUG:ceph-disk:ceph osd.3 data dir is ready at test-erasure-code/3 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=3 --osd-data=test-erasure-code/3 --osd-journal=test-erasure-code/3/journal starting osd.3 at :/0 osd_data test-erasure-code/3 test-erasure-code/3/journal rrun_osd: 54: cat test-erasure-code/3/whoami run_osd: 54: '[' 3 = 3 ']' run_osd: 56: ./ceph osd crush create-or-move 3 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.3' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.3 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.3 up in weight 1 up_from 15 up_thru 16 down_at 0 last_clean_interval [0,0) 127.0.0.1:6812/4997 127.0.0.1:6813/4997 127.0.0.1:6814/4997 127.0.0.1:6815/4997 exists,up 68b70aa2-fff0-4876-b6a0-9d6699c81ab4 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 4 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=4 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/4 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/4 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/4 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/4 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/4' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/4 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/4 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/4 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is fc16f8eb-a5ad-47ac-aa0f-da18343f3269 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise fc16f8eb-a5ad-47ac-aa0f-da18343f3269 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 4 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/4/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 4 --monmap test-erasure-code/4/activate.monmap --osd-data test-erasure-code/4 --osd-journal test-erasure-code/4/journal --osd-uuid fc16f8eb-a5ad-47ac-aa0f-da18343f3269 --keyring test-erasure-code/4/keyring 2014-10-08 11:13:28.255071 2b1e4b162bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:28.291265 2b1e4b162bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:28.291923 2b1e4b162bc0 -1 filestore(test-erasure-code/4) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:28.320622 2b1e4b162bc0 -1 created object store test-erasure-code/4 journal test-erasure-code/4/journal for osd.4 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:28.320710 2b1e4b162bc0 -1 auth: error reading file: test-erasure-code/4/keyring: can't open test-erasure-code/4/keyring: (2) No such file or directory 2014-10-08 11:13:28.320876 2b1e4b162bc0 -1 created new key in keyring test-erasure-code/4/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.4 -i test-erasure-code/4/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.4 DEBUG:ceph-disk:ceph osd.4 data dir is ready at test-erasure-code/4 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=4 --osd-data=test-erasure-code/4 --osd-journal=test-erasure-code/4/journal starting osd.4 at :/0 osd_data test-erasure-code/4 test-erasure-code/4/journal rrun_osd: 54: cat test-erasure-code/4/whoami run_osd: 54: '[' 4 = 4 ']' run_osd: 56: ./ceph osd crush create-or-move 4 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.4' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.4 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.4 up in weight 1 up_from 20 up_thru 21 down_at 0 last_clean_interval [0,0) 127.0.0.1:6816/5398 127.0.0.1:6817/5398 127.0.0.1:6818/5398 127.0.0.1:6819/5398 exists,up fc16f8eb-a5ad-47ac-aa0f-da18343f3269 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 5 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=5 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/5 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/5 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/5 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/5 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/5' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/5 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/5 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/5 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 3f825f8e-e13d-4c6a-aa41-7d01794f1d62 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 3f825f8e-e13d-4c6a-aa41-7d01794f1d62 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 5 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/5/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 5 --monmap test-erasure-code/5/activate.monmap --osd-data test-erasure-code/5 --osd-journal test-erasure-code/5/journal --osd-uuid 3f825f8e-e13d-4c6a-aa41-7d01794f1d62 --keyring test-erasure-code/5/keyring 2014-10-08 11:13:31.118800 2b918aa9bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:31.154046 2b918aa9bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:31.154687 2b918aa9bbc0 -1 filestore(test-erasure-code/5) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:31.183404 2b918aa9bbc0 -1 created object store test-erasure-code/5 journal test-erasure-code/5/journal for osd.5 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:31.183488 2b918aa9bbc0 -1 auth: error reading file: test-erasure-code/5/keyring: can't open test-erasure-code/5/keyring: (2) No such file or directory 2014-10-08 11:13:31.183639 2b918aa9bbc0 -1 created new key in keyring test-erasure-code/5/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.5 -i test-erasure-code/5/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.5 DEBUG:ceph-disk:ceph osd.5 data dir is ready at test-erasure-code/5 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=5 --osd-data=test-erasure-code/5 --osd-journal=test-erasure-code/5/journal starting osd.5 at :/0 osd_data test-erasure-code/5 test-erasure-code/5/journal rrun_osd: 54: cat test-erasure-code/5/whoami run_osd: 54: '[' 5 = 5 ']' run_osd: 56: ./ceph osd crush create-or-move 5 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.5' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.5 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.5 up in weight 1 up_from 24 up_thru 25 down_at 0 last_clean_interval [0,0) 127.0.0.1:6820/5814 127.0.0.1:6821/5814 127.0.0.1:6822/5814 127.0.0.1:6823/5814 exists,up 3f825f8e-e13d-4c6a-aa41-7d01794f1d62 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 6 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=6 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/6 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/6 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/6 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/6 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/6' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/6 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/6 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/6 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is fa62ffe0-6a4c-45d8-9dc5-1e16d2e715eb DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise fa62ffe0-6a4c-45d8-9dc5-1e16d2e715eb *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 6 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/6/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 6 --monmap test-erasure-code/6/activate.monmap --osd-data test-erasure-code/6 --osd-journal test-erasure-code/6/journal --osd-uuid fa62ffe0-6a4c-45d8-9dc5-1e16d2e715eb --keyring test-erasure-code/6/keyring 2014-10-08 11:13:33.982643 2b4801e4ebc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:34.012579 2b4801e4ebc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:34.013342 2b4801e4ebc0 -1 filestore(test-erasure-code/6) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:34.077206 2b4801e4ebc0 -1 created object store test-erasure-code/6 journal test-erasure-code/6/journal for osd.6 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:34.077291 2b4801e4ebc0 -1 auth: error reading file: test-erasure-code/6/keyring: can't open test-erasure-code/6/keyring: (2) No such file or directory 2014-10-08 11:13:34.077461 2b4801e4ebc0 -1 created new key in keyring test-erasure-code/6/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.6 -i test-erasure-code/6/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.6 DEBUG:ceph-disk:ceph osd.6 data dir is ready at test-erasure-code/6 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=6 --osd-data=test-erasure-code/6 --osd-journal=test-erasure-code/6/journal starting osd.6 at :/0 osd_data test-erasure-code/6 test-erasure-code/6/journal rrun_osd: 54: cat test-erasure-code/6/whoami run_osd: 54: '[' 6 = 6 ']' run_osd: 56: ./ceph osd crush create-or-move 6 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.6' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: grep 'osd.6 up' run_osd: 61: ceph osd dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.6 up in weight 1 up_from 28 up_thru 29 down_at 0 last_clean_interval [0,0) 127.0.0.1:6824/6251 127.0.0.1:6825/6251 127.0.0.1:6826/6251 127.0.0.1:6827/6251 exists,up fa62ffe0-6a4c-45d8-9dc5-1e16d2e715eb run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 7 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=7 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/7 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/7 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/7 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/7 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/7' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/7 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/7 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/7 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is dd2875d5-f37d-422b-b826-10ef1939ed59 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise dd2875d5-f37d-422b-b826-10ef1939ed59 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 7 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/7/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 7 --monmap test-erasure-code/7/activate.monmap --osd-data test-erasure-code/7 --osd-journal test-erasure-code/7/journal --osd-uuid dd2875d5-f37d-422b-b826-10ef1939ed59 --keyring test-erasure-code/7/keyring 2014-10-08 11:13:36.754992 2b563350cbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:36.827545 2b563350cbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:36.828127 2b563350cbc0 -1 filestore(test-erasure-code/7) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:36.856649 2b563350cbc0 -1 created object store test-erasure-code/7 journal test-erasure-code/7/journal for osd.7 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:36.856740 2b563350cbc0 -1 auth: error reading file: test-erasure-code/7/keyring: can't open test-erasure-code/7/keyring: (2) No such file or directory 2014-10-08 11:13:36.856892 2b563350cbc0 -1 created new key in keyring test-erasure-code/7/keyring DEBUG:ceph-disk:Authorizing OSD key... 
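Note the quoting in the trace above: ceph_args appends '--log-file=test-erasure-code/osd-$id.log' in single quotes, so the shell never expands $id and the literal string reaches every daemon. Ceph then substitutes its own configuration metavariable $id (the same family as $cluster, $name and $host), so the one shared value resolves to a distinct log and pid file per daemon:

    # Sketch: one literal value fans out per daemon via Ceph's $id metavariable.
    export CEPH_ARGS='--log-file=test-erasure-code/osd-$id.log'
    ./ceph-osd --id=6 --osd-data=test-erasure-code/6   # logs to osd-6.log
    ./ceph-osd --id=7 --osd-data=test-erasure-code/7   # logs to osd-7.log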
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.7 -i test-erasure-code/7/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.7 DEBUG:ceph-disk:ceph osd.7 data dir is ready at test-erasure-code/7 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=7 --osd-data=test-erasure-code/7 --osd-journal=test-erasure-code/7/journal starting osd.7 at :/0 osd_data test-erasure-code/7 test-erasure-code/7/journal rrun_osd: 54: cat test-erasure-code/7/whoami run_osd: 54: '[' 7 = 7 ']' run_osd: 56: ./ceph osd crush create-or-move 7 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.7' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.7 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.7 up in weight 1 up_from 32 up_thru 33 down_at 0 last_clean_interval [0,0) 127.0.0.1:6828/6707 127.0.0.1:6829/6707 127.0.0.1:6830/6707 127.0.0.1:6831/6707 exists,up dd2875d5-f37d-422b-b826-10ef1939ed59 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 8 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=8 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/8 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/8 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/8 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/8 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/8' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/8 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/8 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/8 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is aa980194-7048-4fb4-96a3-630706568640 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise aa980194-7048-4fb4-96a3-630706568640 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 8 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/8/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 8 --monmap test-erasure-code/8/activate.monmap --osd-data test-erasure-code/8 --osd-journal test-erasure-code/8/journal --osd-uuid aa980194-7048-4fb4-96a3-630706568640 --keyring test-erasure-code/8/keyring 2014-10-08 11:13:40.059100 2b6f36eeebc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:40.167340 2b6f36eeebc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:40.169676 2b6f36eeebc0 -1 filestore(test-erasure-code/8) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:40.276076 2b6f36eeebc0 -1 created object store test-erasure-code/8 journal test-erasure-code/8/journal for osd.8 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:40.276188 2b6f36eeebc0 -1 auth: error reading file: test-erasure-code/8/keyring: can't open test-erasure-code/8/keyring: (2) No such file or directory 2014-10-08 11:13:40.276390 2b6f36eeebc0 -1 created new key in keyring test-erasure-code/8/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.8 -i test-erasure-code/8/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.8 DEBUG:ceph-disk:ceph osd.8 data dir is ready at test-erasure-code/8 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=8 --osd-data=test-erasure-code/8 --osd-journal=test-erasure-code/8/journal starting osd.8 at :/0 osd_data test-erasure-code/8 test-erasure-code/8/journal rrun_osd: 54: cat test-erasure-code/8/whoami run_osd: 54: '[' 8 = 8 ']' run_osd: 56: ./ceph osd crush create-or-move 8 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.8' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.8 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.8 up in weight 1 up_from 36 up_thru 37 down_at 0 last_clean_interval [0,0) 127.0.0.1:6832/7237 127.0.0.1:6833/7237 127.0.0.1:6834/7237 127.0.0.1:6835/7237 exists,up aa980194-7048-4fb4-96a3-630706568640 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 9 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=9 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/9 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/9 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/9 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/9 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/9' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/9 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/9 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/9 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is dbf3e050-5c36-4bee-98bf-30e1fae4dc21 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise dbf3e050-5c36-4bee-98bf-30e1fae4dc21 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 9 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/9/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 9 --monmap test-erasure-code/9/activate.monmap --osd-data test-erasure-code/9 --osd-journal test-erasure-code/9/journal --osd-uuid dbf3e050-5c36-4bee-98bf-30e1fae4dc21 --keyring test-erasure-code/9/keyring 2014-10-08 11:13:43.088387 2b262e0b2bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:43.117518 2b262e0b2bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:43.118121 2b262e0b2bc0 -1 filestore(test-erasure-code/9) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:43.145987 2b262e0b2bc0 -1 created object store test-erasure-code/9 journal test-erasure-code/9/journal for osd.9 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:43.146074 2b262e0b2bc0 -1 auth: error reading file: test-erasure-code/9/keyring: can't open test-erasure-code/9/keyring: (2) No such file or directory 2014-10-08 11:13:43.146233 2b262e0b2bc0 -1 created new key in keyring test-erasure-code/9/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.9 -i test-erasure-code/9/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.9 DEBUG:ceph-disk:ceph osd.9 data dir is ready at test-erasure-code/9 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=9 --osd-data=test-erasure-code/9 --osd-journal=test-erasure-code/9/journal starting osd.9 at :/0 osd_data test-erasure-code/9 test-erasure-code/9/journal rrun_osd: 54: cat test-erasure-code/9/whoami run_osd: 54: '[' 9 = 9 ']' run_osd: 56: ./ceph osd crush create-or-move 9 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.9' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.9 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.9 up in weight 1 up_from 40 up_thru 41 down_at 0 last_clean_interval [0,0) 127.0.0.1:6836/7750 127.0.0.1:6837/7750 127.0.0.1:6838/7750 127.0.0.1:6839/7750 exists,up dbf3e050-5c36-4bee-98bf-30e1fae4dc21 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 33: for id in '$(seq 0 10)' run: 34: run_osd test-erasure-code 10 run_osd: 19: local dir=test-erasure-code run_osd: 20: shift run_osd: 21: local id=10 run_osd: 22: shift run_osd: 23: local osd_data=test-erasure-code/10 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=test-erasure-code' run_osd: 27: ceph_disk_args+=' --sysconfdir=test-erasure-code' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch test-erasure-code/ceph.conf run_osd: 33: mkdir -p test-erasure-code/10 run_osd: 34: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/10 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/10 run_osd: 37: local 'ceph_args=--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=test-erasure-code/10' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=test-erasure-code' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=test-erasure-code/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p test-erasure-code/10 run_osd: 49: CEPH_ARGS='--fsid=c4683507-2239-4b95-9a38-2213923eb799 --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=test-erasure-code/10 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/10 DEBUG:ceph-disk:Cluster uuid is c4683507-2239-4b95-9a38-2213923eb799 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 0002ecd9-5414-4ff1-b091-0ca08b0f2516 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 0002ecd9-5414-4ff1-b091-0ca08b0f2516 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 10 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/10/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 10 --monmap test-erasure-code/10/activate.monmap --osd-data test-erasure-code/10 --osd-journal test-erasure-code/10/journal --osd-uuid 0002ecd9-5414-4ff1-b091-0ca08b0f2516 --keyring test-erasure-code/10/keyring 2014-10-08 11:13:46.117784 2b3da6568bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:46.162863 2b3da6568bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:13:46.163739 2b3da6568bc0 -1 filestore(test-erasure-code/10) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:13:46.198921 2b3da6568bc0 -1 created object store test-erasure-code/10 journal test-erasure-code/10/journal for osd.10 fsid c4683507-2239-4b95-9a38-2213923eb799 2014-10-08 11:13:46.199007 2b3da6568bc0 -1 auth: error reading file: test-erasure-code/10/keyring: can't open test-erasure-code/10/keyring: (2) No such file or directory 2014-10-08 11:13:46.199148 2b3da6568bc0 -1 created new key in keyring test-erasure-code/10/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.10 -i test-erasure-code/10/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.10 DEBUG:ceph-disk:ceph osd.10 data dir is ready at test-erasure-code/10 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=10 --osd-data=test-erasure-code/10 --osd-journal=test-erasure-code/10/journal starting osd.10 at :/0 osd_data test-erasure-code/10 test-erasure-code/10/journal rrun_osd: 54: cat test-erasure-code/10/whoami run_osd: 54: '[' 10 = 10 ']' run_osd: 56: ./ceph osd crush create-or-move 10 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.10' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.10 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.10 up in weight 1 up_from 44 up_thru 0 down_at 0 last_clean_interval [0,0) 127.0.0.1:6840/8262 127.0.0.1:6841/8262 127.0.0.1:6842/8262 127.0.0.1:6843/8262 exists,up 0002ecd9-5414-4ff1-b091-0ca08b0f2516 run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 run: 37: CEPH_ARGS= run: 37: ./ceph --admin-daemon test-erasure-code/ceph-osd.0.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} run: 38: grep 'load: jerasure.*lrc' test-erasure-code/osd-0.log 2014-10-08 11:13:17.653139 2b6e2800abc0 10 load: jerasure load: lrc run: 39: create_erasure_coded_pool ecpool create_erasure_coded_pool: 52: local poolname=ecpool create_erasure_coded_pool: 54: ./ceph osd erasure-code-profile set myprofile ruleset-failure-domain=osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create_erasure_coded_pool: 56: ./ceph osd pool create ecpool 12 12 erasure myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'ecpool' created rrun: 40: set rrun: 40: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' run: 40: FUNCTIONS='TEST_alignment_constraints TEST_chunk_mapping TEST_rados_put_get_isa TEST_rados_put_get_jerasure TEST_rados_put_get_lrc_advanced TEST_rados_put_get_lrc_kml' run: 41: for TEST_function in '$FUNCTIONS' run: 42: TEST_alignment_constraints test-erasure-code TEST_alignment_constraints: 192: local payload=ABC TEST_alignment_constraints: 193: echo ABC TTEST_alignment_constraints: 199: ./ceph-conf --show-config-value osd_pool_erasure_code_stripe_width TEST_alignment_constraints: 199: local stripe_width=4096 TEST_alignment_constraints: 200: local block_size=4095 TEST_alignment_constraints: 201: dd if=/dev/zero of=test-erasure-code/ORIGINAL bs=4095 count=2 2+0 records in 2+0 records out 8190 bytes (8.2 kB) copied, 0.000173165 s, 47.3 MB/s TEST_alignment_constraints: 202: ./rados --block-size=4095 --pool ecpool put UNALIGNED test-erasure-code/ORIGINAL INFO: op_size has been rounded to 4096 TEST_alignment_constraints: 204: rm test-erasure-code/ORIGINAL run: 41: for TEST_function in '$FUNCTIONS' run: 42: TEST_chunk_mapping test-erasure-code TEST_chunk_mapping: 244: local dir=test-erasure-code TEST_chunk_mapping: 251: verify_chunk_mapping test-erasure-code ecpool 0 1 verify_chunk_mapping: 221: local dir=test-erasure-code verify_chunk_mapping: 222: local poolname=ecpool verify_chunk_mapping: 223: local first=0 
verify_chunk_mapping: 224: local second=1 vvverify_chunk_mapping: 226: chunk_size cccchunk_size: 208: ./ceph-conf --show-config-value osd_pool_erasure_code_stripe_width ccchunk_size: 208: local stripe_width=4096 cccchunk_size: 209: ./ceph osd erasure-code-profile get default cccchunk_size: 209: grep k= *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** ccchunk_size: 209: eval local k=2 cccchunk_size: 209: local k=2 ccchunk_size: 210: echo 2048 vverify_chunk_mapping: 226: printf '%*s' 2048 FIRSTecpool vvverify_chunk_mapping: 226: chunk_size cccchunk_size: 208: ./ceph-conf --show-config-value osd_pool_erasure_code_stripe_width ccchunk_size: 208: local stripe_width=4096 cccchunk_size: 209: ./ceph osd erasure-code-profile get default cccchunk_size: 209: grep k= *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** ccchunk_size: 209: eval local k=2 cccchunk_size: 209: local k=2 ccchunk_size: 210: echo 2048 vverify_chunk_mapping: 226: printf '%*s' 2048 SECONDecpool verify_chunk_mapping: 226: local 'payload= FIRSTecpool SECONDecpool' verify_chunk_mapping: 227: echo -n ' FIRSTecpool SECONDecpool' verify_chunk_mapping: 229: ./rados --pool ecpool put SOMETHINGecpool test-erasure-code/ORIGINAL verify_chunk_mapping: 230: ./rados --pool ecpool get SOMETHINGecpool test-erasure-code/COPY verify_chunk_mapping: 231: osds=($(get_osds $poolname SOMETHING$poolname)) vverify_chunk_mapping: 231: get_osds ecpool SOMETHINGecpool gget_osds: 73: local poolname=ecpool gget_osds: 74: local objectname=SOMETHINGecpool gget_osds: 76: ./ceph osd map ecpool SOMETHINGecpool gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** verify_chunk_mapping: 231: local -a osds verify_chunk_mapping: 232: (( i = 0 )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 233: ./ceph daemon osd.8 flush_journal *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** admin_socket: exception getting command descriptions: [Errno 2] No such file or directory verify_chunk_mapping: 232: (( i++ )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 233: ./ceph daemon osd.0 flush_journal *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** admin_socket: exception getting command descriptions: [Errno 2] No such file or directory verify_chunk_mapping: 232: (( i++ )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 233: ./ceph daemon osd.10 flush_journal *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** admin_socket: exception getting command descriptions: [Errno 2] No such file or directory verify_chunk_mapping: 232: (( i++ )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 235: diff test-erasure-code/ORIGINAL test-erasure-code/COPY verify_chunk_mapping: 236: rm test-erasure-code/COPY verify_chunk_mapping: 238: osds=($(get_osds $poolname SOMETHING$poolname)) vverify_chunk_mapping: 238: get_osds ecpool SOMETHINGecpool gget_osds: 73: local poolname=ecpool gget_osds: 74: local objectname=SOMETHINGecpool gget_osds: 76: ./ceph osd map ecpool SOMETHINGecpool gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** verify_chunk_mapping: 238: local -a osds verify_chunk_mapping: 239: grep --quiet --recursive --text FIRSTecpool test-erasure-code/8 verify_chunk_mapping: 240: grep --quiet --recursive --text SECONDecpool test-erasure-code/0 TEST_chunk_mapping: 253: ./ceph osd 
erasure-code-profile set remap-profile plugin=lrc 'layers=[ [ "_DD", "" ] ]' mapping=_DD 'ruleset-steps=[ [ "choose", "osd", 0 ] ]' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_chunk_mapping: 258: ./ceph osd erasure-code-profile get remap-profile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** directory=.libs layers=[ [ "_DD", "" ] ] mapping=_DD plugin=lrc ruleset-steps=[ [ "choose", "osd", 0 ] ] TEST_chunk_mapping: 259: ./ceph osd pool create remap-pool 12 12 erasure remap-profile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'remap-pool' created TEST_chunk_mapping: 267: verify_chunk_mapping test-erasure-code remap-pool 1 2 verify_chunk_mapping: 221: local dir=test-erasure-code verify_chunk_mapping: 222: local poolname=remap-pool verify_chunk_mapping: 223: local first=1 verify_chunk_mapping: 224: local second=2 vvverify_chunk_mapping: 226: chunk_size cccchunk_size: 208: ./ceph-conf --show-config-value osd_pool_erasure_code_stripe_width ccchunk_size: 208: local stripe_width=4096 cccchunk_size: 209: grep k= cccchunk_size: 209: ./ceph osd erasure-code-profile get default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** ccchunk_size: 209: eval local k=2 cccchunk_size: 209: local k=2 ccchunk_size: 210: echo 2048 vverify_chunk_mapping: 226: printf '%*s' 2048 FIRSTremap-pool vvverify_chunk_mapping: 226: chunk_size cccchunk_size: 208: ./ceph-conf --show-config-value osd_pool_erasure_code_stripe_width ccchunk_size: 208: local stripe_width=4096 cccchunk_size: 209: ./ceph osd erasure-code-profile get default cccchunk_size: 209: grep k= *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** ccchunk_size: 209: eval local k=2 cccchunk_size: 209: local k=2 ccchunk_size: 210: echo 2048 vverify_chunk_mapping: 226: printf '%*s' 2048 SECONDremap-pool verify_chunk_mapping: 226: local 'payload= FIRSTremap-pool SECONDremap-pool' verify_chunk_mapping: 227: echo -n ' FIRSTremap-pool SECONDremap-pool' verify_chunk_mapping: 229: ./rados --pool remap-pool put SOMETHINGremap-pool test-erasure-code/ORIGINAL verify_chunk_mapping: 230: ./rados --pool remap-pool get SOMETHINGremap-pool test-erasure-code/COPY verify_chunk_mapping: 231: osds=($(get_osds $poolname SOMETHING$poolname)) vverify_chunk_mapping: 231: get_osds remap-pool SOMETHINGremap-pool gget_osds: 73: local poolname=remap-pool gget_osds: 74: local objectname=SOMETHINGremap-pool gget_osds: 76: ./ceph osd map remap-pool SOMETHINGremap-pool gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** verify_chunk_mapping: 231: local -a osds verify_chunk_mapping: 232: (( i = 0 )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 233: ./ceph daemon osd.6 flush_journal *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** admin_socket: exception getting command descriptions: [Errno 2] No such file or directory verify_chunk_mapping: 232: (( i++ )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 233: ./ceph daemon osd.0 flush_journal *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** admin_socket: exception getting command descriptions: [Errno 2] No such file or directory verify_chunk_mapping: 232: (( i++ )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 233: ./ceph daemon osd.4 flush_journal *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** admin_socket: exception getting command descriptions: 
[Errno 2] No such file or directory verify_chunk_mapping: 232: (( i++ )) verify_chunk_mapping: 232: (( i < 3 )) verify_chunk_mapping: 235: diff test-erasure-code/ORIGINAL test-erasure-code/COPY verify_chunk_mapping: 236: rm test-erasure-code/COPY verify_chunk_mapping: 238: osds=($(get_osds $poolname SOMETHING$poolname)) vverify_chunk_mapping: 238: get_osds remap-pool SOMETHINGremap-pool gget_osds: 73: local poolname=remap-pool gget_osds: 74: local objectname=SOMETHINGremap-pool gget_osds: 76: ./ceph osd map remap-pool SOMETHINGremap-pool gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** verify_chunk_mapping: 238: local -a osds verify_chunk_mapping: 239: grep --quiet --recursive --text FIRSTremap-pool test-erasure-code/0 verify_chunk_mapping: 240: grep --quiet --recursive --text SECONDremap-pool test-erasure-code/4 TEST_chunk_mapping: 269: delete_pool remap-pool delete_pool: 61: local poolname=remap-pool delete_pool: 63: ./ceph osd pool delete remap-pool remap-pool --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'remap-pool' removed TEST_chunk_mapping: 270: ./ceph osd erasure-code-profile rm remap-profile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 41: for TEST_function in '$FUNCTIONS' run: 42: TEST_rados_put_get_isa test-erasure-code TEST_rados_put_get_isa: 152: plugin_exists isa plugin_exists: 99: local plugin=isa plugin_exists: 101: local status plugin_exists: 102: ./ceph osd erasure-code-profile set TESTPROFILE plugin=isa *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** plugin_exists: 103: ./ceph osd crush rule create-erasure TESTRULE TESTPROFILE plugin_exists: 104: grep 'isa.*No such file' Error EIO: load dlopen(.libs/libec_isa.so): .libs/libec_isa.so: cannot open shared object file: No such file or directoryfailed to load plugin using profile TESTPROFILE plugin_exists: 105: status=1 plugin_exists: 110: ./ceph osd erasure-code-profile rm TESTPROFILE *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** plugin_exists: 111: return 1 TEST_rados_put_get_isa: 153: echo 'SKIP because plugin isa has not been built' SKIP because plugin isa has not been built TEST_rados_put_get_isa: 154: return 0 run: 41: for TEST_function in '$FUNCTIONS' run: 42: TEST_rados_put_get_jerasure test-erasure-code TEST_rados_put_get_jerasure: 171: local dir=test-erasure-code TEST_rados_put_get_jerasure: 173: rados_put_get test-erasure-code ecpool rados_put_get: 67: local dir=test-erasure-code rados_put_get: 68: local poolname=ecpool rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 AAA rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 BBB rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 CCCC rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 DDDD rados_put_get: 77: ./rados --pool ecpool put SOMETHING test-erasure-code/ORIGINAL rados_put_get: 78: ./rados --pool ecpool get SOMETHING test-erasure-code/COPY rados_put_get: 79: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 80: rm test-erasure-code/COPY rados_put_get: 87: initial_osds=($(get_osds $poolname SOMETHING)) rrados_put_get: 87: get_osds ecpool SOMETHING gget_osds: 73: local poolname=ecpool gget_osds: 74: local objectname=SOMETHING gget_osds: 76: ./ceph osd map ecpool SOMETHING gget_osds: 77: 
perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 87: local -a initial_osds rados_put_get: 88: local last=2 rados_put_get: 89: ./ceph osd out 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked out osd.2. rados_put_get: 90: grep '\<2\>' rados_put_get: 90: get_osds ecpool SOMETHING get_osds: 73: local poolname=ecpool get_osds: 74: local objectname=SOMETHING get_osds: 76: ./ceph osd map ecpool SOMETHING get_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 91: ./rados --pool ecpool get SOMETHING test-erasure-code/COPY rados_put_get: 92: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 93: ./ceph osd in 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked in osd.2. rados_put_get: 95: rm test-erasure-code/ORIGINAL TEST_rados_put_get_jerasure: 175: local poolname=pool-jerasure TEST_rados_put_get_jerasure: 176: local profile=profile-jerasure TEST_rados_put_get_jerasure: 178: ./ceph osd erasure-code-profile set profile-jerasure plugin=jerasure k=4 m=2 ruleset-failure-domain=osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_rados_put_get_jerasure: 182: ./ceph osd pool create pool-jerasure 12 12 erasure profile-jerasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool-jerasure' created TEST_rados_put_get_jerasure: 185: rados_put_get test-erasure-code pool-jerasure rados_put_get: 67: local dir=test-erasure-code rados_put_get: 68: local poolname=pool-jerasure rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 AAA rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 BBB rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 CCCC rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 DDDD rados_put_get: 77: ./rados --pool pool-jerasure put SOMETHING test-erasure-code/ORIGINAL rados_put_get: 78: ./rados --pool pool-jerasure get SOMETHING test-erasure-code/COPY rados_put_get: 79: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 80: rm test-erasure-code/COPY rados_put_get: 87: initial_osds=($(get_osds $poolname SOMETHING)) rrados_put_get: 87: get_osds pool-jerasure SOMETHING gget_osds: 73: local poolname=pool-jerasure gget_osds: 74: local objectname=SOMETHING gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' gget_osds: 76: ./ceph osd map pool-jerasure SOMETHING *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 87: local -a initial_osds rados_put_get: 88: local last=5 rados_put_get: 89: ./ceph osd out 5 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked out osd.5. rados_put_get: 90: grep '\<5\>' rados_put_get: 90: get_osds pool-jerasure SOMETHING get_osds: 73: local poolname=pool-jerasure get_osds: 74: local objectname=SOMETHING get_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' get_osds: 76: ./ceph osd map pool-jerasure SOMETHING *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 91: ./rados --pool pool-jerasure get SOMETHING test-erasure-code/COPY rados_put_get: 92: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 93: ./ceph osd in 5 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked in osd.5. 
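Stepping back to the TEST_chunk_mapping trace above: chunk_size is osd_pool_erasure_code_stripe_width (4096) divided by the profile's k (2 in the default profile), i.e. 2048 bytes per data chunk. Each marker is left-padded to exactly one chunk with printf '%*s', so after encoding, FIRST<pool> can only land in chunk 0 and SECOND<pool> only in chunk 1; grepping the data directories of the first and second OSD of the object's up set proves the placement. The failed "ceph daemon osd.N flush_journal" calls ([Errno 2] on the admin socket) do not invalidate the check, since the greps still find the markers on disk. For remap-pool, the profile's mapping=_DD shifts the data chunks to positions 1 and 2, which is why the greps move to osds[1] and osds[2]. A worked version of the arithmetic and the check, with values read off the trace (the up set [8,0,10] comes from `ceph osd map`):

    stripe_width=4096                  # osd_pool_erasure_code_stripe_width
    k=2                                # from the default erasure-code profile
    chunk=$((stripe_width / k))        # = 2048 bytes per data chunk
    # each marker is left-padded to one full chunk, so it ends its chunk:
    { printf '%*s' "$chunk" FIRSTecpool
      printf '%*s' "$chunk" SECONDecpool; } > ORIGINAL
    ./rados --pool ecpool put SOMETHINGecpool ORIGINAL
    grep --quiet --recursive --text FIRSTecpool  test-erasure-code/8  # osds[0]
    grep --quiet --recursive --text SECONDecpool test-erasure-code/0  # osds[1]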
rados_put_get: 95: rm test-erasure-code/ORIGINAL TEST_rados_put_get_jerasure: 187: delete_pool pool-jerasure delete_pool: 61: local poolname=pool-jerasure delete_pool: 63: ./ceph osd pool delete pool-jerasure pool-jerasure --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool-jerasure' removed TEST_rados_put_get_jerasure: 188: ./ceph osd erasure-code-profile rm profile-jerasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 41: for TEST_function in '$FUNCTIONS' run: 42: TEST_rados_put_get_lrc_advanced test-erasure-code TEST_rados_put_get_lrc_advanced: 115: local dir=test-erasure-code TEST_rados_put_get_lrc_advanced: 116: local poolname=pool-lrc TEST_rados_put_get_lrc_advanced: 117: local profile=profile-lrc TEST_rados_put_get_lrc_advanced: 119: ./ceph osd erasure-code-profile set profile-lrc plugin=lrc mapping=DD_ 'ruleset-steps=[ [ "chooseleaf", "osd", 0 ] ]' 'layers=[ [ "DDc", "" ] ]' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_rados_put_get_lrc_advanced: 124: ./ceph osd pool create pool-lrc 12 12 erasure profile-lrc *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool-lrc' created TEST_rados_put_get_lrc_advanced: 127: rados_put_get test-erasure-code pool-lrc rados_put_get: 67: local dir=test-erasure-code rados_put_get: 68: local poolname=pool-lrc rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 AAA rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 BBB rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 CCCC rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 DDDD rados_put_get: 77: ./rados --pool pool-lrc put SOMETHING test-erasure-code/ORIGINAL rados_put_get: 78: ./rados --pool pool-lrc get SOMETHING test-erasure-code/COPY rados_put_get: 79: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 80: rm test-erasure-code/COPY rados_put_get: 87: initial_osds=($(get_osds $poolname SOMETHING)) rrados_put_get: 87: get_osds pool-lrc SOMETHING gget_osds: 73: local poolname=pool-lrc gget_osds: 74: local objectname=SOMETHING gget_osds: 76: ./ceph osd map pool-lrc SOMETHING gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 87: local -a initial_osds rados_put_get: 88: local last=2 rados_put_get: 89: ./ceph osd out 6 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked out osd.6. rados_put_get: 90: get_osds pool-lrc SOMETHING rados_put_get: 90: grep '\<6\>' get_osds: 73: local poolname=pool-lrc get_osds: 74: local objectname=SOMETHING get_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' get_osds: 76: ./ceph osd map pool-lrc SOMETHING *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 91: ./rados --pool pool-lrc get SOMETHING test-erasure-code/COPY rados_put_get: 92: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 93: ./ceph osd in 6 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked in osd.6. 
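profile-lrc above uses the lrc plugin's low-level syntax rather than k/m/l. In mapping, each character is one chunk position: D holds data, _ is left for a coding layer to fill. Each entry in layers is one encoding step; in "DDc" the layer reads the two D positions and writes a coding chunk at the c position, so mapping=DD_ with layers=[ [ "DDc", "" ] ] behaves like a k=2, m=1 code. The earlier remap-profile (mapping=_DD) is the same idea with the data chunks moved to positions 1 and 2, which is exactly what verify_chunk_mapping confirmed on disk. Restated as a standalone sketch:

    # LRC low-level profile, restating the command from the trace:
    ./ceph osd erasure-code-profile set profile-lrc \
        plugin=lrc \
        mapping=DD_ \
        'layers=[ [ "DDc", "" ] ]' \
        'ruleset-steps=[ [ "chooseleaf", "osd", 0 ] ]'
    ./ceph osd erasure-code-profile get profile-lrc   # echoes the settings back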
rados_put_get: 95: rm test-erasure-code/ORIGINAL TEST_rados_put_get_lrc_advanced: 129: delete_pool pool-lrc delete_pool: 61: local poolname=pool-lrc delete_pool: 63: ./ceph osd pool delete pool-lrc pool-lrc --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool-lrc' removed TEST_rados_put_get_lrc_advanced: 130: ./ceph osd erasure-code-profile rm profile-lrc *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 41: for TEST_function in '$FUNCTIONS' run: 42: TEST_rados_put_get_lrc_kml test-erasure-code TEST_rados_put_get_lrc_kml: 134: local dir=test-erasure-code TEST_rados_put_get_lrc_kml: 135: local poolname=pool-lrc TEST_rados_put_get_lrc_kml: 136: local profile=profile-lrc TEST_rados_put_get_lrc_kml: 138: ./ceph osd erasure-code-profile set profile-lrc plugin=lrc k=4 m=2 l=3 ruleset-failure-domain=osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_rados_put_get_lrc_kml: 142: ./ceph osd pool create pool-lrc 12 12 erasure profile-lrc *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool-lrc' created TEST_rados_put_get_lrc_kml: 145: rados_put_get test-erasure-code pool-lrc rados_put_get: 67: local dir=test-erasure-code rados_put_get: 68: local poolname=pool-lrc rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 AAA rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 BBB rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 CCCC rados_put_get: 70: for marker in AAA BBB CCCC DDDD rados_put_get: 71: printf '%*s' 1024 DDDD rados_put_get: 77: ./rados --pool pool-lrc put SOMETHING test-erasure-code/ORIGINAL rados_put_get: 78: ./rados --pool pool-lrc get SOMETHING test-erasure-code/COPY rados_put_get: 79: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 80: rm test-erasure-code/COPY rados_put_get: 87: initial_osds=($(get_osds $poolname SOMETHING)) rrados_put_get: 87: get_osds pool-lrc SOMETHING gget_osds: 73: local poolname=pool-lrc gget_osds: 74: local objectname=SOMETHING gget_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' gget_osds: 76: ./ceph osd map pool-lrc SOMETHING *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 87: local -a initial_osds rados_put_get: 88: local last=7 rados_put_get: 89: ./ceph osd out 8 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked out osd.8. rados_put_get: 90: grep '\<8\>' rados_put_get: 90: get_osds pool-lrc SOMETHING get_osds: 73: local poolname=pool-lrc get_osds: 74: local objectname=SOMETHING get_osds: 77: perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g' get_osds: 76: ./ceph osd map pool-lrc SOMETHING *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** rados_put_get: 91: ./rados --pool pool-lrc get SOMETHING test-erasure-code/COPY rados_put_get: 92: diff test-erasure-code/ORIGINAL test-erasure-code/COPY rados_put_get: 93: ./ceph osd in 8 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked in osd.8. 
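All three plugin tests (ecpool with the default jerasure profile, pool-jerasure with k=4 m=2, and both pool-lrc variants) funnel through the same rados_put_get cycle: write ORIGINAL, read it back and diff, then mark one OSD of the object's up set out and prove the object still decodes from the surviving chunks before marking it back in. A condensed sketch, with the victim choice (last entry of the initial up set) stated as an assumption read off the trace:

    # get_osds comes straight from the trace: parse the up set out of osd map.
    get_osds() { ./ceph osd map "$1" "$2" |
        perl -p -e 's/.*up \(\[(.*?)\].*/$1/; s/,/ /g'; }

    degraded_put_get() {                 # hypothetical name for the sketch
        local pool=$1
        ./rados --pool "$pool" put SOMETHING ORIGINAL
        ./rados --pool "$pool" get SOMETHING COPY && diff ORIGINAL COPY
        local osds=($(get_osds "$pool" SOMETHING))
        local victim=${osds[${#osds[@]}-1]}          # e.g. osd.5 for k=4 m=2
        ./ceph osd out "$victim"
        ! get_osds "$pool" SOMETHING | grep "\<$victim\>"  # left the up set
        ./rados --pool "$pool" get SOMETHING COPY    # degraded read
        diff ORIGINAL COPY                           # content must survive
        ./ceph osd in "$victim"
    }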
rados_put_get: 95: rm test-erasure-code/ORIGINAL
TEST_rados_put_get_lrc_kml: 147: delete_pool pool-lrc
delete_pool: 61: local poolname=pool-lrc
delete_pool: 63: ./ceph osd pool delete pool-lrc pool-lrc --yes-i-really-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'pool-lrc' removed
TEST_rados_put_get_lrc_kml: 148: ./ceph osd erasure-code-profile rm profile-lrc
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
run: 47: delete_pool ecpool
delete_pool: 61: local poolname=ecpool
delete_pool: 63: ./ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'ecpool' removed
run: 48: teardown test-erasure-code
teardown: 24: local dir=test-erasure-code
teardown: 25: kill_daemons test-erasure-code
kill_daemons: 60: local dir=test-erasure-code
kkill_daemons: 59: grep pidfile
kkill_daemons: 59: find test-erasure-code
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/a/pidfile
kill_daemons: 62: pid=3736
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 3736
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 3736
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-0.pidfile
kill_daemons: 62: pid=3931
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 3931
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 3931
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 3931
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-1.pidfile
kill_daemons: 62: pid=4267
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4267
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4267
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4267
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-2.pidfile
kill_daemons: 62: pid=4623
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4623
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4623
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4623
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-3.pidfile
kill_daemons: 62: pid=4999
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4999
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4999
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 4999
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-4.pidfile
kill_daemons: 62: pid=5400
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 5400
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 5400
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-5.pidfile
kill_daemons: 62: pid=5816
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 5816
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 5816
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-6.pidfile
kill_daemons: 62: pid=6253
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 6253
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 6253
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 6253
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-7.pidfile
kill_daemons: 62: pid=6709
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 6709
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 6709
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 6709
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-8.pidfile
kill_daemons: 62: pid=7239
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 7239
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 7239
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-9.pidfile
kill_daemons: 62: pid=7752
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 7752
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 7752
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 7752
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat test-erasure-code/osd-10.pidfile
kill_daemons: 62: pid=8264
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 8264
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 8264
kill_daemons: 64: break
teardown: 26: rm -fr test-erasure-code
main: 108: code=0
main: 112: teardown test-erasure-code
teardown: 24: local dir=test-erasure-code
teardown: 25: kill_daemons test-erasure-code
kill_daemons: 60: local dir=test-erasure-code
kkill_daemons: 59: find test-erasure-code
kkill_daemons: 59: grep pidfile
find: `test-erasure-code': No such file or directory
teardown: 26: rm -fr test-erasure-code
main: 113: return 0
PASS: test/erasure-code/test-erasure-code.sh
Running main() from gtest_main.cc
[==========] Running 92 tests from 7 test cases.
[----------] Global test environment set-up.
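[editor's note] The teardown trace above shows each daemon's pidfile being walked through a kill-and-wait retry loop (bash xtrace repeats the first character of PS4 once per nesting level, which is why nested command substitutions appear as "kkill_daemons"). A minimal sketch of that helper, reconstructed from the trace alone — the real function lives in the qa test library, and details such as the stderr redirection are assumptions:

    # Sketch of kill_daemons as reconstructed from the xtrace above.
    kill_daemons() {
        local dir=$1
        # every daemon started by the harness drops a pidfile under $dir
        for pidfile in $(find $dir | grep pidfile) ; do
            pid=$(cat $pidfile)
            # re-send SIGKILL with growing pauses (at most 0+1+1+1+2+3 = 8s
            # of waiting); kill(1) fails once the pid is gone, ending the loop
            for try in 0 1 1 1 2 3 ; do
                kill -9 $pid 2> /dev/null || break
                sleep $try
            done
        done
    }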
[----------] 1 test from Buffer
[ RUN ] Buffer.constructors
[ OK ] Buffer.constructors (2 ms)
[----------] 1 test from Buffer (2 ms total)
[----------] 1 test from BufferRaw
[ RUN ] BufferRaw.ostream
[ OK ] BufferRaw.ostream (0 ms)
[----------] 1 test from BufferRaw (0 ms total)
[----------] 14 tests from TestRawPipe
[ RUN ] TestRawPipe.create_zero_copy
[ OK ] TestRawPipe.create_zero_copy (4 ms)
[ RUN ] TestRawPipe.c_str_no_fd
[ OK ] TestRawPipe.c_str_no_fd (1 ms)
[ RUN ] TestRawPipe.c_str_basic
[ OK ] TestRawPipe.c_str_basic (1 ms)
[ RUN ] TestRawPipe.c_str_twice
[ OK ] TestRawPipe.c_str_twice (1 ms)
[ RUN ] TestRawPipe.c_str_basic_offset
[ OK ] TestRawPipe.c_str_basic_offset (1 ms)
[ RUN ] TestRawPipe.c_str_dest_short
[ OK ] TestRawPipe.c_str_dest_short (1 ms)
[ RUN ] TestRawPipe.c_str_source_short
[ OK ] TestRawPipe.c_str_source_short (1 ms)
[ RUN ] TestRawPipe.c_str_explicit_zero_offset
[ OK ] TestRawPipe.c_str_explicit_zero_offset (1 ms)
[ RUN ] TestRawPipe.c_str_explicit_positive_offset
[ OK ] TestRawPipe.c_str_explicit_positive_offset (1 ms)
[ RUN ] TestRawPipe.c_str_explicit_positive_empty_result
[ OK ] TestRawPipe.c_str_explicit_positive_empty_result (1 ms)
[ RUN ] TestRawPipe.c_str_source_short_explicit_offset
[ OK ] TestRawPipe.c_str_source_short_explicit_offset (1 ms)
[ RUN ] TestRawPipe.c_str_dest_short_explicit_offset
[ OK ] TestRawPipe.c_str_dest_short_explicit_offset (1 ms)
[ RUN ] TestRawPipe.buffer_list_read_fd_zero_copy
[ OK ] TestRawPipe.buffer_list_read_fd_zero_copy (2 ms)
[ RUN ] TestRawPipe.buffer_list_write_fd_zero_copy
[ OK ] TestRawPipe.buffer_list_write_fd_zero_copy (1 ms)
[----------] 14 tests from TestRawPipe (18 ms total)
[----------] 17 tests from BufferPtr
[ RUN ] BufferPtr.constructors
common/buffer.cc: In function 'ceph::buffer::ptr::ptr(const ceph::buffer::ptr&, unsigned int, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.325437
common/buffer.cc: 573: FAILED assert(o+l <= p._len)
ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad]
2: (ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned int)+0x73) [0x8aa099]
3: (BufferPtr_constructors_Test::TestBody()+0x1976) [0x850a70]
4: (testing::Test::Run()+0x95) [0x88a9ff]
5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65]
6: (testing::TestCase::Run()+0xca) [0x88b470]
7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec]
8: (testing::UnitTest::Run()+0x1c) [0x88e94e]
9: (main()+0x3e) [0x8a8486]
10: (__libc_start_main()+0xed) [0x2b0a1bcf876d]
11: ./unittest_bufferlist() [0x848a59]
NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.
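[editor's note] The "FAILED assert" banner and backtrace above are expected output, not a failure of the build: BufferPtr.constructors deliberately constructs a buffer::ptr with an out-of-range offset/length so that Ceph's assert machinery fires, and the test catches the resulting exception — hence the "[ OK ] BufferPtr.constructors" verdict that follows the dumps. To replay a single test from this build tree, gtest's standard filter flag can be used (a sketch; the ./unittest_bufferlist path is taken from the backtrace above):

    # rerun one test case; --gtest_filter is a standard gtest flag
    ./unittest_bufferlist --gtest_filter='BufferPtr.constructors'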
common/buffer.cc: In function 'ceph::buffer::ptr::ptr(const ceph::buffer::ptr&, unsigned int, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.328413 common/buffer.cc: 574: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned int)+0x9e) [0x8aa0c4] 3: (BufferPtr_constructors_Test::TestBody()+0x1a5b) [0x850b55] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.constructors (6 ms) [ RUN ] BufferPtr.assignment [ OK ] BufferPtr.assignment (0 ms) [ RUN ] BufferPtr.clone [ OK ] BufferPtr.clone (0 ms) [ RUN ] BufferPtr.swap [ OK ] BufferPtr.swap (0 ms) [ RUN ] BufferPtr.release [ OK ] BufferPtr.release (0 ms) [ RUN ] BufferPtr.have_raw [ OK ] BufferPtr.have_raw (0 ms) [ RUN ] BufferPtr.at_buffer_head [ OK ] BufferPtr.at_buffer_head (0 ms) [ RUN ] BufferPtr.at_buffer_tail [ OK ] BufferPtr.at_buffer_tail (0 ms) [ RUN ] BufferPtr.is_n_page_sized [ OK ] BufferPtr.is_n_page_sized (0 ms) [ RUN ] BufferPtr.accessors common/buffer.cc: In function 'char* ceph::buffer::ptr::c_str()' thread 2b0a1c097c40 time 2014-10-08 11:14:33.331549 common/buffer.cc: 635: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::c_str()+0x37) [0x8aa353] 3: (BufferPtr_accessors_Test::TestBody()+0x26d) [0x853ec7] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'char& ceph::buffer::ptr::operator[](unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.334500 common/buffer.cc: 656: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int)+0x3a) [0x8aa4ac] 3: (BufferPtr_accessors_Test::TestBody()+0x328) [0x853f82] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'const char* ceph::buffer::ptr::c_str() const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.337247 common/buffer.cc: 629: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::c_str() const+0x37) [0x8aa2d1] 3: (BufferPtr_accessors_Test::TestBody()+0x4d0) [0x85412a] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'const char& ceph::buffer::ptr::operator[](unsigned int) const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.340213 common/buffer.cc: 650: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int) const+0x3a) [0x8aa416] 3: (BufferPtr_accessors_Test::TestBody()+0x58b) [0x8541e5] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'char& ceph::buffer::ptr::operator[](unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.343144 common/buffer.cc: 657: FAILED assert(n < _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int)+0x65) [0x8aa4d7] 3: (BufferPtr_accessors_Test::TestBody()+0xca7) [0x854901] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
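[editor's note] Each dump ends with the same NOTE asking for the executable or an `objdump -rdS` listing. With the unstripped binary from this build, the bracketed frame addresses can also be resolved directly — a sketch, assuming binutils is installed and the binary still has symbols (the address below is the c_str() frame from the dump above):

    # resolve one return address from a backtrace frame:
    # -C demangles, -f prints the function name, -e names the binary
    addr2line -Cfe ./unittest_bufferlist 0x8aa2d1
    # or produce the annotated disassembly the NOTE asks for
    objdump -rdS ./unittest_bufferlist > unittest_bufferlist.dis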
common/buffer.cc: In function 'const char& ceph::buffer::ptr::operator[](unsigned int) const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.345888 common/buffer.cc: 651: FAILED assert(n < _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::operator[](unsigned int) const+0x65) [0x8aa441] 3: (BufferPtr_accessors_Test::TestBody()+0xd62) [0x8549bc] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'const char* ceph::buffer::ptr::raw_c_str() const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.348849 common/buffer.cc: 661: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::raw_c_str() const+0x37) [0x8aa53f] 3: (BufferPtr_accessors_Test::TestBody()+0xe27) [0x854a81] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'unsigned int ceph::buffer::ptr::raw_length() const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.351735 common/buffer.cc: 662: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::raw_length() const+0x37) [0x8aa583] 3: (BufferPtr_accessors_Test::TestBody()+0xedd) [0x854b37] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'int ceph::buffer::ptr::raw_nref() const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.354731 common/buffer.cc: 663: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::raw_nref() const+0x37) [0x8aa5c7] 3: (BufferPtr_accessors_Test::TestBody()+0xf93) [0x854bed] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.accessors (26 ms) [ RUN ] BufferPtr.cmp [ OK ] BufferPtr.cmp (0 ms) [ RUN ] BufferPtr.is_zero [ OK ] BufferPtr.is_zero (0 ms) [ RUN ] BufferPtr.copy_out ./include/buffer.h: In function 'void ceph::buffer::ptr::copy_out(unsigned int, unsigned int, char*) const' thread 2b0a1c097c40 time 2014-10-08 11:14:33.357563 ./include/buffer.h: 200: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_out(unsigned int, unsigned int, char*) const+0x3c) [0x8787c0] 3: (BufferPtr_copy_out_Test::TestBody()+0x6b) [0x85679d] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.copy_out (3 ms) [ RUN ] BufferPtr.copy_in common/buffer.cc: In function 'void ceph::buffer::ptr::copy_in(unsigned int, unsigned int, const char*)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.360463 common/buffer.cc: 715: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*)+0x42) [0x8aa8e4] 3: (BufferPtr_copy_in_Test::TestBody()+0x6b) [0x856ced] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'void ceph::buffer::ptr::copy_in(unsigned int, unsigned int, const char*)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.363424 common/buffer.cc: 717: FAILED assert(o+l <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*)+0x9f) [0x8aa941] 3: (BufferPtr_copy_in_Test::TestBody()+0x15c) [0x856dde] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'void ceph::buffer::ptr::copy_in(unsigned int, unsigned int, const char*)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.366291 common/buffer.cc: 716: FAILED assert(o <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*)+0x6d) [0x8aa90f] 3: (BufferPtr_copy_in_Test::TestBody()+0x21f) [0x856ea1] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.copy_in (9 ms) [ RUN ] BufferPtr.append common/buffer.cc: In function 'void ceph::buffer::ptr::append(char)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.369096 common/buffer.cc: 699: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char)+0x3c) [0x8aa784] 3: (BufferPtr_append_Test::TestBody()+0x52) [0x85732e] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
common/buffer.cc: In function 'void ceph::buffer::ptr::append(const char*, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.371954 common/buffer.cc: 707: FAILED assert(_raw) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char const*, unsigned int)+0x3f) [0x8aa827] 3: (BufferPtr_append_Test::TestBody()+0x106) [0x8573e2] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'void ceph::buffer::ptr::append(char)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.378744 common/buffer.cc: 700: FAILED assert(1 <= unused_tail_length()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char)+0x6b) [0x8aa7b3] 3: (BufferPtr_append_Test::TestBody()+0x1d8) [0x8574b4] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. common/buffer.cc: In function 'void ceph::buffer::ptr::append(const char*, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.381603 common/buffer.cc: 708: FAILED assert(l <= unused_tail_length()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::append(char const*, unsigned int)+0x6f) [0x8aa857] 3: (BufferPtr_append_Test::TestBody()+0x28c) [0x857568] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
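[editor's note] The same pattern repeats through the append tests: each call that would write past the raw buffer's capacity trips a bounds assert such as assert(l <= unused_tail_length()) and is caught by the test, so the suite keeps running. One quick sanity check is to count the banners for the whole fixture and confirm the number is stable across runs (a sketch, same binary-path assumption as above):

    # count the expected assertion banners produced by the BufferPtr fixture
    ./unittest_bufferlist --gtest_filter='BufferPtr.*' 2>&1 | grep -c 'FAILED assert'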
[ OK ] BufferPtr.append (15 ms) [ RUN ] BufferPtr.zero common/buffer.cc: In function 'void ceph::buffer::ptr::zero(unsigned int, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.384665 common/buffer.cc: 730: FAILED assert(o+l <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::ptr::zero(unsigned int, unsigned int)+0x45) [0x8aaa0f] 3: (BufferPtr_zero_Test::TestBody()+0xa0) [0x857cfa] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferPtr.zero (3 ms) [ RUN ] BufferPtr.ostream [ OK ] BufferPtr.ostream (0 ms) [----------] 17 tests from BufferPtr (62 ms total) [----------] 12 tests from BufferListIterator [ RUN ] BufferListIterator.constructors [ OK ] BufferListIterator.constructors (0 ms) [ RUN ] BufferListIterator.operator_equal [ OK ] BufferListIterator.operator_equal (0 ms) [ RUN ] BufferListIterator.get_off [ OK ] BufferListIterator.get_off (0 ms) [ RUN ] BufferListIterator.get_remaining [ OK ] BufferListIterator.get_remaining (0 ms) [ RUN ] BufferListIterator.end [ OK ] BufferListIterator.end (0 ms) [ RUN ] BufferListIterator.advance [ OK ] BufferListIterator.advance (0 ms) [ RUN ] BufferListIterator.seek [ OK ] BufferListIterator.seek (0 ms) [ RUN ] BufferListIterator.operator_star [ OK ] BufferListIterator.operator_star (0 ms) [ RUN ] BufferListIterator.operator_plus_plus [ OK ] BufferListIterator.operator_plus_plus (1 ms) [ RUN ] BufferListIterator.get_current_ptr [ OK ] BufferListIterator.get_current_ptr (0 ms) [ RUN ] BufferListIterator.copy [ OK ] BufferListIterator.copy (0 ms) [ RUN ] BufferListIterator.copy_in [ OK ] BufferListIterator.copy_in (0 ms) [----------] 12 tests from BufferListIterator (1 ms total) [----------] 46 tests from BufferList [ RUN ] BufferList.constructors [ OK ] BufferList.constructors (0 ms) [ RUN ] BufferList.operator_equal [ OK ] BufferList.operator_equal (0 ms) [ RUN ] BufferList.buffers [ OK ] BufferList.buffers (0 ms) [ RUN ] BufferList.swap [ OK ] BufferList.swap (0 ms) [ RUN ] BufferList.length [ OK ] BufferList.length (0 ms) [ RUN ] BufferList.contents_equal [ OK ] BufferList.contents_equal (0 ms) [ RUN ] BufferList.is_page_aligned [ OK ] BufferList.is_page_aligned (0 ms) [ RUN ] BufferList.is_n_page_sized [ OK ] BufferList.is_n_page_sized (0 ms) [ RUN ] BufferList.is_zero [ OK ] BufferList.is_zero (0 ms) [ RUN ] BufferList.clear [ OK ] BufferList.clear (0 ms) [ RUN ] BufferList.push_front [ OK ] BufferList.push_front (0 ms) [ RUN ] BufferList.push_back [ OK ] BufferList.push_back (0 ms) [ RUN ] BufferList.is_contiguous [ OK ] BufferList.is_contiguous (0 ms) [ RUN ] BufferList.rebuild [ OK ] BufferList.rebuild (0 ms) [ RUN ] BufferList.rebuild_page_aligned [ OK ] BufferList.rebuild_page_aligned (0 ms) [ RUN ] BufferList.claim [ OK ] BufferList.claim (0 ms) [ RUN ] BufferList.claim_append [ OK ] BufferList.claim_append (0 ms) [ RUN ] BufferList.claim_prepend [ OK ] BufferList.claim_prepend (0 ms) [ RUN ] BufferList.begin [ OK ] BufferList.begin (0 ms) [ RUN ] BufferList.end [ OK ] 
BufferList.end (0 ms) [ RUN ] BufferList.copy [ OK ] BufferList.copy (0 ms) [ RUN ] BufferList.copy_in [ OK ] BufferList.copy_in (0 ms) [ RUN ] BufferList.append common/buffer.cc: In function 'void ceph::buffer::list::append(const ceph::buffer::ptr&, unsigned int, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:33.388860 common/buffer.cc: 1257: FAILED assert(len+off <= bp.length()) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: (ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsigned int)+0x4f) [0x8acb6d] 3: (BufferList_append_Test::TestBody()+0x1227) [0x86a503] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferList.append (4 ms) [ RUN ] BufferList.append_zero [ OK ] BufferList.append_zero (0 ms) [ RUN ] BufferList.operator_brackets [ OK ] BufferList.operator_brackets (0 ms) [ RUN ] BufferList.c_str [ OK ] BufferList.c_str (0 ms) [ RUN ] BufferList.substr_of [ OK ] BufferList.substr_of (0 ms) [ RUN ] BufferList.splice [ OK ] BufferList.splice (0 ms) [ RUN ] BufferList.write [ OK ] BufferList.write (0 ms) [ RUN ] BufferList.encode_base64 [ OK ] BufferList.encode_base64 (0 ms) [ RUN ] BufferList.decode_base64 [ OK ] BufferList.decode_base64 (0 ms) [ RUN ] BufferList.hexdump [ OK ] BufferList.hexdump (0 ms) [ RUN ] BufferList.read_file [ OK ] BufferList.read_file (11 ms) [ RUN ] BufferList.read_fd [ OK ] BufferList.read_fd (2 ms) [ RUN ] BufferList.write_file bufferlist::write_file(un/like/ly): failed to open file: (2) No such file or directory [ OK ] BufferList.write_file (0 ms) [ RUN ] BufferList.write_fd [ OK ] BufferList.write_fd (2 ms) [ RUN ] BufferList.crc32c [ OK ] BufferList.crc32c (0 ms) [ RUN ] BufferList.crc32c_append [ OK ] BufferList.crc32c_append (12 ms) [ RUN ] BufferList.crc32c_append_perf populating large buffers (a, b=c=d) a.crc32c(0) = 1138817026 at 3929.09 MB/sec a.crc32c(0) (again) = 1138817026 at 8.53333e+07 MB/sec a.crc32c(5) = 3239494520 at 20833.3 MB/sec a.crc32c(5) (again) = 3239494520 at 20774.2 MB/sec b.crc32c(0) = 2481791210 at 3803.98 MB/sec b.crc32c(0) (again)= 2481791210 at 1.28e+08 MB/sec ab.crc32c(0) = 2988268779 at 41423.9 MB/sec ac.crc32c(0) = 2988268779 at 7560.66 MB/sec ba.crc32c(0) = 169240695 at 42136.4 MB/sec ba.crc32c(5) = 1265464778 at 20901.4 MB/sec crc cache hits (same start) = 5 crc cache hits (adjusted) = 207 [ OK ] BufferList.crc32c_append_perf (2785 ms) [ RUN ] BufferList.compare [ OK ] BufferList.compare (0 ms) [ RUN ] BufferList.ostream buffer::list(len=6, buffer::ptr(0~3 0x3f34460 in raw 0x3f34460 len 3 nref 1), buffer::ptr(0~3 0x3f34220 in raw 0x3f34220 len 3 nref 1) ) [ OK ] BufferList.ostream (0 ms) [ RUN ] BufferList.zero common/buffer.cc: In function 'void ceph::buffer::list::zero(unsigned int, unsigned int)' thread 2b0a1c097c40 time 2014-10-08 11:14:36.204501 common/buffer.cc: 1057: FAILED assert(o+l <= _len) ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8) 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x95) [0x8a87ad] 2: 
(ceph::buffer::list::zero(unsigned int, unsigned int)+0x47) [0x8abe07] 3: (BufferList_zero_Test::TestBody()+0x45e) [0x874bf2] 4: (testing::Test::Run()+0x95) [0x88a9ff] 5: (testing::internal::TestInfoImpl::Run()+0xd7) [0x88af65] 6: (testing::TestCase::Run()+0xca) [0x88b470] 7: (testing::internal::UnitTestImpl::RunAllTests()+0x272) [0x88f9ec] 8: (testing::UnitTest::Run()+0x1c) [0x88e94e] 9: (main()+0x3e) [0x8a8486] 10: (__libc_start_main()+0xed) [0x2b0a1bcf876d] 11: ./unittest_bufferlist() [0x848a59] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. [ OK ] BufferList.zero (3 ms) [ RUN ] BufferList.EmptyAppend [ OK ] BufferList.EmptyAppend (0 ms) [ RUN ] BufferList.TestPtrAppend [ OK ] BufferList.TestPtrAppend (24 ms) [ RUN ] BufferList.TestDirectAppend [ OK ] BufferList.TestDirectAppend (22 ms) [ RUN ] BufferList.TestCopyAll [ OK ] BufferList.TestCopyAll (79 ms) [----------] 46 tests from BufferList (2944 ms total) [----------] 1 test from BufferHash [ RUN ] BufferHash.all [ OK ] BufferHash.all (0 ms) [----------] 1 test from BufferHash (0 ms total) [----------] Global test environment tear-down [==========] 92 tests from 7 test cases ran. (3027 ms total) [ PASSED ] 92 tests. PASS: unittest_bufferlist.sh checking ceph-dencoder generated test instances... numgen type 3 ACLGrant 2 ACLGranteeType 2 ACLOwner 2 ACLPermission 3 AuthMonitor::Incremental 2 BloomHitSet 2 Capability copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported 2 CompatSet 1 CrushWrapper copy operator= not supported copy ctor not supported 2 DBObjectMap::State 2 DBObjectMap::_Header 2 DecayCounter 2 ECSubRead 2 ECSubReadReply 3 ECSubWrite 2 ECSubWriteReply 2 ECUtil::HashInfo 2 ECommitted 1 EExport 2 EFragment 2 EImportFinish 1 EImportStart 1 EMetaBlob 1 EMetaBlob::dirlump 1 EMetaBlob::fullbit copy operator= not supported copy ctor not supported 2 EMetaBlob::nullbit 1 EMetaBlob::remotebit 2 EOpen 1 EResetJournal 1 ESession 1 ESessions 1 ESlaveUpdate 1 ESubtreeMap 1 ETableClient 1 ETableServer 1 EUpdate 2 ExplicitHashHitSet 2 ExplicitObjectHitSet 4 HitSet 8 HitSet::Params 1 InoTable 1 InodeStore 2 JournalPointer 3 Journaler::Header 1 LogEntry 2 LogEntryKey 1 LogSummary 0 MAuth 0 MAuthReply 0 MCacheExpire 0 MClientCapRelease 0 MClientCaps 0 MClientLease 0 MClientReconnect 0 MClientReply 0 MClientRequest 0 MClientRequestForward 0 MClientSession 0 MClientSnap 0 MCommand 0 MCommandReply 3 MDSCacheObjectInfo 1 MDSMap copy operator= not supported copy ctor not supported 2 MDSMap::mds_info_t copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported 0 MDentryLink 0 MDentryUnlink 0 MDirUpdate 0 MDiscover 0 MDiscoverReply 0 MExportCaps 0 MExportCapsAck 0 MExportDir 0 MExportDirAck 0 MExportDirCancel 0 MExportDirDiscover 0 MExportDirDiscoverAck 0 MExportDirFinish 0 MExportDirNotify 0 MExportDirNotifyAck 0 MExportDirPrep 0 MExportDirPrepAck 0 MForward 0 MGetPoolStats 0 MGetPoolStatsReply 0 MHeartbeat 0 MInodeFileCaps 0 MLock 0 MLog 0 MLogAck 0 MMDSBeacon 0 MMDSCacheRejoin 0 MMDSFindIno 0 MMDSFindInoReply 0 MMDSFragmentNotify 0 MMDSLoadTargets 0 MMDSMap 0 MMDSResolve 0 MMDSResolveAck 0 MMDSSlaveRequest 0 MMDSTableRequest 0 MMonCommand 0 MMonCommandAck 0 MMonElection 0 MMonGetMap 0 MMonGetVersion 0 MMonGetVersionReply 0 MMonGlobalID 0 MMonJoin 0 MMonMap 0 MMonPaxos 0 MMonProbe 0 MMonScrub 0 MMonSubscribe 0 MMonSubscribeAck 0 MMonSync 0 MOSDAlive 0 MOSDBoot 0 MOSDFailure 0 MOSDMap 0 MOSDOp 0 MOSDOpReply 0 
MOSDPGBackfill 0 MOSDPGCreate 0 MOSDPGInfo 0 MOSDPGLog 0 MOSDPGMissing 0 MOSDPGNotify 0 MOSDPGQuery 0 MOSDPGRemove 0 MOSDPGScan 0 MOSDPGTemp 0 MOSDPGTrim 0 MOSDPing 0 MOSDRepScrub 0 MOSDScrub 0 MOSDSubOp 0 MOSDSubOpReply 0 MPGStats 0 MPGStatsAck 0 MPing 0 MPoolOp 0 MPoolOpReply 0 MRemoveSnaps 0 MRoute 0 MStatfs 0 MStatfsReply 0 MWatchNotify 8 MonCap 2 MonMap copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported 1 MonitorDBStore::Op 2 MonitorDBStore::Transaction 2 OSDMap 1 OSDMap::Incremental 3 OSDSuperblock 2 ObjectCacheInfo 2 ObjectMetaInfo 1 ObjectRecoveryInfo 2 ObjectRecoveryProgress 4 ObjectStore::Transaction 2 PGMap 4 PGMap::Incremental 3 PullOp 3 PushOp 3 PushReplyOp 2 RGWAccessControlList 1 RGWAccessControlPolicy 2 RGWAccessKey 2 RGWBucketEnt 2 RGWBucketInfo 1 RGWCacheNotifyInfo 2 RGWObjManifest 2 RGWObjManifestPart 2 RGWSubUser 2 RGWUploadPartInfo 2 RGWUserInfo 2 ScrubMap 3 ScrubMap::object 3 SequencerPosition 1 SessionMap 2 SloppyCRCMap 3 SnapContext 2 SnapInfo 4 SnapRealmInfo 2 SnapServer 3 SnapSet 3 bloom_filter 1 cap_reconnect_t 2 client_writeable_range_t 3 clone_info 2 cls_lock_break_op 2 cls_lock_get_info_op 2 cls_lock_get_info_reply 2 cls_lock_list_locks_reply 2 cls_lock_lock_op 2 cls_lock_unlock_op 2 cls_rbd_parent 3 cls_rbd_snap 2 cls_refcount_get_op 2 cls_refcount_put_op 2 cls_refcount_read_op 2 cls_refcount_read_ret 2 cls_refcount_set_op 2 cls_replica_log_bound 2 cls_replica_log_delete_marker_op 1 cls_replica_log_get_bounds_op 3 cls_replica_log_get_bounds_ret 2 cls_replica_log_item_marker 2 cls_replica_log_progress_marker 2 cls_replica_log_set_marker_op 2 cls_rgw_bi_log_list_op 2 cls_rgw_bi_log_list_ret 2 cls_rgw_bi_log_trim_op 2 cls_rgw_gc_defer_entry_op 2 cls_rgw_gc_list_op 2 cls_rgw_gc_list_ret 2 cls_rgw_gc_obj_info 2 cls_rgw_gc_remove_op 2 cls_rgw_gc_set_entry_op 2 cls_rgw_obj 1 cls_rgw_obj_chain 2 cls_user_bucket 2 cls_user_bucket_entry 2 cls_user_complete_stats_sync_op 1 cls_user_get_header_op 2 cls_user_get_header_ret 2 cls_user_header 2 cls_user_list_buckets_op 2 cls_user_list_buckets_ret 2 cls_user_remove_bucket_op 2 cls_user_set_buckets_op 2 cls_user_stats 5 coll_t 3 compressible_bloom_filter 1 dirfrag_load_vec_t 3 entity_addr_t 4 entity_name_t 5 filepath 2 fnode_t 2 frag_info_t 11 ghobject_t 5 hobject_t 2 inode_backpointer_t 2 inode_backtrace_t 1 inode_load_vec_t 2 inode_t 1 link_rollback 1 mds_load_t 2 mds_table_pending_t 2 nest_info_t 2 obj_list_snap_response_t 4 object_copy_cursor_t 3 object_copy_data_t 1 object_info_t 6 object_locator_t 2 object_stat_collection_t 1 object_stat_sum_t 2 objectstore_perf_stat_t 2 old_inode_t 2 old_rstat_t 2 osd_info_t 2 osd_peer_stat_t 2 osd_reqid_t 2 osd_stat_t 2 osd_xinfo_t 2 pg_create_t 2 pg_history_t 2 pg_hit_set_history_t 2 pg_hit_set_info_t 2 pg_info_t 2 pg_interval_t 2 pg_log_entry_t 2 pg_log_t 2 pg_ls_response_t 2 pg_missing_t 2 pg_missing_t::item 4 pg_pool_t copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported 5 pg_query_t copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported 3 pg_stat_t 4 pg_t 2 pool_snap_info_t copy operator= not supported copy ctor not supported copy operator= not supported copy ctor 
not supported 2 pool_stat_t copy operator= not supported copy ctor not supported copy operator= not supported copy ctor not supported 2 pow2_hist_t 2 rados::cls::lock::locker_id_t 2 rados::cls::lock::locker_info_t 1 rename_rollback 1 rename_rollback::drec 4 request_redirect_t 2 rgw_bi_log_entry 2 rgw_bucket 2 rgw_bucket_category_stats 4 rgw_bucket_dir 3 rgw_bucket_dir_entry 2 rgw_bucket_dir_entry_meta 3 rgw_bucket_dir_header 2 rgw_bucket_entry_ver 2 rgw_bucket_pending_info 2 rgw_cls_list_op 5 rgw_cls_list_ret 2 rgw_cls_obj_complete_op 2 rgw_cls_obj_prepare_op 2 rgw_cls_tag_timeout_op 2 rgw_intent_log_entry 2 rgw_log_entry 2 rgw_obj 1 rmdir_rollback 2 session_info_t 2 snaplink_t 2 sr_t 3 string_snap_t 2 watch_info_t passed 1413 tests. PASS: test/encoding/check-generated.sh main: 105: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create main: 106: local code main: 107: run osd-pool-create run: 21: local dir=osd-pool-create run: 23: export CEPH_ARGS rrun: 24: uuidgen run: 24: CEPH_ARGS+='--fsid=c17d2312-d08d-4ed5-97de-cfc0d86af816 --auth-supported=none ' run: 25: CEPH_ARGS+='--mon-host=127.0.0.1 ' rrun: 27: set rrun: 27: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' run: 27: FUNCTIONS='TEST_default_deprectated_0 TEST_default_deprectated_1 TEST_default_deprectated_2 TEST_erasure_code_pool TEST_erasure_code_pool_lrc TEST_erasure_code_profile_default TEST_erasure_crush_rule TEST_erasure_crush_stripe_width TEST_erasure_crush_stripe_width_padded TEST_erasure_invalid_profile TEST_replicated_pool TEST_replicated_pool_with_ruleset' run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_default_deprectated_0 osd-pool-create TEST_default_deprectated_0: 36: local dir=osd-pool-create TEST_default_deprectated_0: 38: expected=66 TEST_default_deprectated_0: 39: run_mon osd-pool-create a --public-addr 127.0.0.1 --osd_pool_default_crush_replicated_ruleset 66 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 --osd_pool_default_crush_replicated_ruleset 66 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 --osd_pool_default_crush_replicated_ruleset 66 TEST_default_deprectated_0: 41: ./ceph --format 
json osd dump TEST_default_deprectated_0: 41: grep '"crush_ruleset":66' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":1,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:15:52.938941","modified":"2014-10-08 11:15:52.938941","flags":"","cluster_snapshot":"","pool_max":0,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":66,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}}} TEST_default_deprectated_0: 42: CEPH_ARGS= TEST_default_deprectated_0: 42: ./ceph --admin-daemon osd-pool-create/a/ceph-mon.a.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_default_deprectated_0: 43: grep 'osd_pool_default_crush_rule is deprecated ' osd-pool-create/a/log run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=17341 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17341 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17341 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17341 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_default_deprectated_1 osd-pool-create TEST_default_deprectated_1: 47: local dir=osd-pool-create TEST_default_deprectated_1: 49: expected=55 TEST_default_deprectated_1: 50: run_mon osd-pool-create a --public-addr 127.0.0.1 --osd_pool_default_crush_rule 55 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 --osd_pool_default_crush_rule 55 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: 
./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 --osd_pool_default_crush_rule 55 TEST_default_deprectated_1: 52: ./ceph --format json osd dump TEST_default_deprectated_1: 52: grep '"crush_ruleset":55' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":1,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:15:56.356498","modified":"2014-10-08 11:15:56.356498","flags":"","cluster_snapshot":"","pool_max":0,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":55,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}}} TEST_default_deprectated_1: 53: CEPH_ARGS= TEST_default_deprectated_1: 53: ./ceph --admin-daemon osd-pool-create/a/ceph-mon.a.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_default_deprectated_1: 54: grep 'osd_pool_default_crush_rule is deprecated ' osd-pool-create/a/log 2014-10-08 11:15:56.344980 2adb99542f40 0 osd_pool_default_crush_rule is deprecated use osd_pool_default_crush_replicated_ruleset instead 2014-10-08 11:15:56.356240 2adb9ae5e700 0 osd_pool_default_crush_rule is deprecated use osd_pool_default_crush_replicated_ruleset instead run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=17421 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17421 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17421 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_default_deprectated_2 osd-pool-create TEST_default_deprectated_2: 58: local 
dir=osd-pool-create TEST_default_deprectated_2: 59: expected=77 TEST_default_deprectated_2: 60: unexpected=33 TEST_default_deprectated_2: 61: run_mon osd-pool-create a --public-addr 127.0.0.1 --osd_pool_default_crush_rule 77 --osd_pool_default_crush_replicated_ruleset 33 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 --osd_pool_default_crush_rule 77 --osd_pool_default_crush_replicated_ruleset 33 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 --osd_pool_default_crush_rule 77 --osd_pool_default_crush_replicated_ruleset 33 TEST_default_deprectated_2: 64: ./ceph --format json osd dump TEST_default_deprectated_2: 64: grep '"crush_ruleset":77' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":1,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:15:57.016022","modified":"2014-10-08 11:15:57.016022","flags":"","cluster_snapshot":"","pool_max":0,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":77,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}}} TEST_default_deprectated_2: 65: ./ceph --format json osd dump TEST_default_deprectated_2: 65: grep '"crush_ruleset":33' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_default_deprectated_2: 66: CEPH_ARGS= TEST_default_deprectated_2: 66: ./ceph --admin-daemon osd-pool-create/a/ceph-mon.a.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_default_deprectated_2: 67: grep 'osd_pool_default_crush_rule is deprecated ' osd-pool-create/a/log 2014-10-08 11:15:57.006671 2b4bb2104f40 0 osd_pool_default_crush_rule is deprecated use osd_pool_default_crush_replicated_ruleset instead 2014-10-08 11:15:57.015798 2b4bb3a20700 0 osd_pool_default_crush_rule is deprecated use osd_pool_default_crush_replicated_ruleset instead run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create 
kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=17500 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17500 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17500 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17500 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_code_pool osd-pool-create TEST_erasure_code_pool: 150: local dir=osd-pool-create TEST_erasure_code_pool: 151: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_erasure_code_pool: 152: ./ceph --format json osd dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_code_pool: 153: local 'expected="erasure_code_profile":"default"' TEST_erasure_code_pool: 154: grep '"erasure_code_profile":"default"' osd-pool-create/osd.json TEST_erasure_code_pool: 155: ./ceph osd pool create erasurecodes 12 12 erasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'erasurecodes' created TEST_erasure_code_pool: 156: ./ceph --format json osd dump TEST_erasure_code_pool: 156: tee osd-pool-create/osd.json *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":3,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:15:58.988138","modified":"2014-10-08 
11:15:59.740546","flags":"","cluster_snapshot":"","pool_max":1,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":0,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0},{"pool":1,"pool_name":"erasurecodes","flags":1,"flags_names":"hashpspool","type":3,"size":3,"min_size":2,"crush_ruleset":1,"object_hash":2,"pg_num":12,"pg_placement_num":12,"crash_replay_interval":0,"last_change":"3","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"default","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":4096,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}}} TEST_erasure_code_pool: 157: grep '"erasure_code_profile":"default"' osd-pool-create/osd.json TEST_erasure_code_pool: 159: ./ceph osd pool create erasurecodes 12 12 erasure TEST_erasure_code_pool: 160: grep 'already exists' pool 'erasurecodes' already exists TEST_erasure_code_pool: 161: ./ceph osd pool create erasurecodes 12 12 TEST_erasure_code_pool: 162: grep 'cannot change to type replicated' Error EINVAL: pool 'erasurecodes' cannot change to type replicated run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=17622 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17622 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17622 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17622 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_code_pool_lrc 
osd-pool-create TEST_erasure_code_pool_lrc: 186: local dir=osd-pool-create TEST_erasure_code_pool_lrc: 187: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_erasure_code_pool_lrc: 189: ./ceph osd erasure-code-profile set LRCprofile plugin=lrc mapping=DD_ 'layers=[ [ "DDc", "" ] ]' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_code_pool_lrc: 194: ./ceph --format json osd dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_code_pool_lrc: 195: local 'expected="erasure_code_profile":"LRCprofile"' TEST_erasure_code_pool_lrc: 196: local poolname=erasurecodes TEST_erasure_code_pool_lrc: 197: grep '"erasure_code_profile":"LRCprofile"' osd-pool-create/osd.json TEST_erasure_code_pool_lrc: 198: ./ceph osd pool create erasurecodes 12 12 erasure LRCprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'erasurecodes' created TEST_erasure_code_pool_lrc: 199: ./ceph --format json osd dump TEST_erasure_code_pool_lrc: 199: tee osd-pool-create/osd.json *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":4,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:16:02.178556","modified":"2014-10-08 
11:16:03.849172","flags":"","cluster_snapshot":"","pool_max":1,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":0,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0},{"pool":1,"pool_name":"erasurecodes","flags":1,"flags_names":"hashpspool","type":3,"size":3,"min_size":2,"crush_ruleset":1,"object_hash":2,"pg_num":12,"pg_placement_num":12,"crash_replay_interval":0,"last_change":"4","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"LRCprofile","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":4096,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"LRCprofile":{"directory":".libs","layers":"[ [ \"DDc\", \"\" ] ]","mapping":"DD_","plugin":"lrc"},"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}}} TEST_erasure_code_pool_lrc: 200: grep '"erasure_code_profile":"LRCprofile"' osd-pool-create/osd.json TEST_erasure_code_pool_lrc: 201: ./ceph osd crush rule ls TEST_erasure_code_pool_lrc: 201: grep erasurecodes *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "erasurecodes"] run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=17860 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17860 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17860 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 17860 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_code_profile_default osd-pool-create TEST_erasure_code_profile_default: 111: local 
dir=osd-pool-create TEST_erasure_code_profile_default: 112: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_erasure_code_profile_default: 113: ./ceph osd erasure-code-profile rm default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_code_profile_default: 114: ./ceph osd erasure-code-profile ls TEST_erasure_code_profile_default: 114: grep default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_code_profile_default: 115: ./ceph osd pool create 12 12 erasure default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** erasure not valid: erasure doesn't represent an int pool '12' created TEST_erasure_code_profile_default: 116: ./ceph osd erasure-code-profile ls TEST_erasure_code_profile_default: 116: grep default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** default run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=18096 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18096 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18096 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18096 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_crush_rule osd-pool-create TEST_erasure_crush_rule: 81: local dir=osd-pool-create TEST_erasure_crush_rule: 82: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 
--osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_erasure_crush_rule: 86: local crush_ruleset=myruleset TEST_erasure_crush_rule: 87: ./ceph osd crush rule ls TEST_erasure_crush_rule: 87: grep myruleset *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_crush_rule: 88: ./ceph osd crush rule create-erasure myruleset *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** created ruleset myruleset at 1 TEST_erasure_crush_rule: 89: ./ceph osd crush rule ls TEST_erasure_crush_rule: 89: grep myruleset *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "myruleset"] TEST_erasure_crush_rule: 90: local poolname TEST_erasure_crush_rule: 91: poolname=pool_erasure1 TEST_erasure_crush_rule: 92: ./ceph --format json osd dump TEST_erasure_crush_rule: 92: grep '"crush_ruleset":1' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_crush_rule: 93: ./ceph osd pool create pool_erasure1 12 12 erasure default myruleset *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool_erasure1' created TEST_erasure_crush_rule: 94: ./ceph --format json osd dump TEST_erasure_crush_rule: 94: grep '"crush_ruleset":1' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":3,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:16:10.290912","modified":"2014-10-08 11:16:11.977212","flags":"","cluster_snapshot":"","pool_max":1,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":0,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0},{"pool":1,"pool_name":"pool_erasure1","flags":1,"flags_names":"hashpspool","type":3,"size":3,"min_size":2,"crush_ruleset":1,"object_hash":2,"pg_num":12,"pg_placement_num":12,"crash_replay_interval":0,"last_change":"3","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"default","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":4096,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol
_van"}}} TEST_erasure_crush_rule: 98: poolname=pool_erasure2 TEST_erasure_crush_rule: 99: ./ceph osd erasure-code-profile set myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_erasure_crush_rule: 100: ./ceph osd pool create pool_erasure2 12 12 erasure myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool_erasure2' created TEST_erasure_crush_rule: 101: ./ceph osd crush rule ls TEST_erasure_crush_rule: 101: grep pool_erasure2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "pool_erasure2"] TEST_erasure_crush_rule: 106: poolname=pool_erasure3 TEST_erasure_crush_rule: 107: ./ceph osd pool create pool_erasure3 12 12 erasure myprofile INVALIDRULESET *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error ENOENT: specified ruleset INVALIDRULESET doesn't exist run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=18288 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18288 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18288 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18288 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_crush_stripe_width osd-pool-create TEST_erasure_crush_stripe_width: 120: local dir=osd-pool-create TEST_erasure_crush_stripe_width: 122: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TTEST_erasure_crush_stripe_width: 123: ./ceph-conf --show-config-value osd_pool_erasure_code_stripe_width TEST_erasure_crush_stripe_width: 123: stripe_width=4096 TEST_erasure_crush_stripe_width: 124: ./ceph osd pool create pool_erasure 12 12 erasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool_erasure' created TEST_erasure_crush_stripe_width: 125: ./ceph --format json osd dump TEST_erasure_crush_stripe_width: 125: tee osd-pool-create/osd.json *** 
DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"epoch":3,"fsid":"c17d2312-d08d-4ed5-97de-cfc0d86af816","created":"2014-10-08 11:16:15.530506","modified":"2014-10-08 11:16:16.087600","flags":"","cluster_snapshot":"","pool_max":1,"max_osd":0,"pools":[{"pool":0,"pool_name":"rbd","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_ruleset":0,"object_hash":2,"pg_num":64,"pg_placement_num":64,"crash_replay_interval":0,"last_change":"1","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":0,"cache_target_full_ratio_micro":0,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":0,"expected_num_objects":0},{"pool":1,"pool_name":"pool_erasure","flags":1,"flags_names":"hashpspool","type":3,"size":3,"min_size":2,"crush_ruleset":1,"object_hash":2,"pg_num":12,"pg_placement_num":12,"crash_replay_interval":0,"last_change":"3","last_force_op_resend":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"default","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"min_read_recency_for_promote":0,"stripe_width":4096,"expected_num_objects":0}],"osds":[],"osd_xinfo":[],"pg_temp":[],"primary_temp":[],"blacklist":[],"erasure_code_profiles":{"default":{"directory":".libs","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}}} TEST_erasure_crush_stripe_width: 126: grep '"stripe_width":4096' osd-pool-create/osd.json run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=18734 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18734 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18734 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18734 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_crush_stripe_width_padded osd-pool-create TEST_erasure_crush_stripe_width_padded: 130: local dir=osd-pool-create TEST_erasure_crush_stripe_width_padded: 133: profile+=' plugin=jerasure' 
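The padded stripe-width case being set up in the surrounding trace starts the monitor with osd_pool_erasure_code_stripe_width=8191 and a jerasure profile with k=4 data chunks and m=2 coding chunks. 8191 is not divisible by 4, so the per-chunk size is rounded up to 2048 bytes and the effective stripe width is padded to 8192, which is what the test greps for. A minimal sketch of that arithmetic, assuming simple round-up-to-a-multiple-of-k behaviour (variable names mirror the test; the real code may additionally honour the plugin's own alignment):

    desired_stripe_width=8191
    k=4
    # ceil(8191 / 4) = 2048 bytes per data chunk
    expected_chunk_size=$(( (desired_stripe_width + k - 1) / k ))
    # padded stripe width: 4 * 2048 = 8192
    actual_stripe_width=$(( expected_chunk_size * k ))
    echo "$expected_chunk_size $actual_stripe_width"   # prints: 2048 8192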
TEST_erasure_crush_stripe_width_padded: 134: profile+=' technique=reed_sol_van' TEST_erasure_crush_stripe_width_padded: 135: k=4 TEST_erasure_crush_stripe_width_padded: 136: profile+=' k=4' TEST_erasure_crush_stripe_width_padded: 137: profile+=' m=2' TEST_erasure_crush_stripe_width_padded: 138: expected_chunk_size=2048 TEST_erasure_crush_stripe_width_padded: 139: actual_stripe_width=8192 TEST_erasure_crush_stripe_width_padded: 140: desired_stripe_width=8191 TEST_erasure_crush_stripe_width_padded: 141: run_mon osd-pool-create a --public-addr 127.0.0.1 --osd_pool_erasure_code_stripe_width 8191 --osd_pool_default_erasure_code_profile ' plugin=jerasure technique=reed_sol_van k=4 m=2' run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 --osd_pool_erasure_code_stripe_width 8191 --osd_pool_default_erasure_code_profile ' plugin=jerasure technique=reed_sol_van k=4 m=2' ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 --osd_pool_erasure_code_stripe_width 8191 --osd_pool_default_erasure_code_profile ' plugin=jerasure technique=reed_sol_van k=4 m=2' TEST_erasure_crush_stripe_width_padded: 144: ./ceph osd pool create pool_erasure 12 12 erasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'pool_erasure' created TEST_erasure_crush_stripe_width_padded: 145: ./ceph osd dump TEST_erasure_crush_stripe_width_padded: 145: tee osd-pool-create/osd.json *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** epoch 3 fsid c17d2312-d08d-4ed5-97de-cfc0d86af816 created 2014-10-08 11:16:17.636245 modified 2014-10-08 11:16:18.012775 flags pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0 pool 1 'pool_erasure' erasure size 6 min_size 4 crush_ruleset 1 object_hash rjenkins pg_num 12 pgp_num 12 last_change 3 flags hashpspool stripe_width 8192 max_osd 0 TEST_erasure_crush_stripe_width_padded: 146: grep 'stripe_width 8192' osd-pool-create/osd.json run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=18848 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18848 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18848 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18848 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 
24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_erasure_invalid_profile osd-pool-create TEST_erasure_invalid_profile: 72: local dir=osd-pool-create TEST_erasure_invalid_profile: 73: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_erasure_invalid_profile: 74: local poolname=pool_erasure TEST_erasure_invalid_profile: 75: local notaprofile=not-a-valid-erasure-code-profile TEST_erasure_invalid_profile: 76: ./ceph osd pool create pool_erasure 12 12 erasure not-a-valid-erasure-code-profile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: cannot determine the erasure code plugin because there is no 'plugin' entry in the erasure_code_profile {} TEST_erasure_invalid_profile: 77: ./ceph osd erasure-code-profile ls TEST_erasure_invalid_profile: 77: grep not-a-valid-erasure-code-profile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=18960 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18960 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18960 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 18960 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_replicated_pool osd-pool-create TEST_replicated_pool: 205: local dir=osd-pool-create TEST_replicated_pool: 206: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a 
--public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_replicated_pool: 207: ./ceph osd pool create replicated 12 12 replicated replicated_ruleset TEST_replicated_pool: 208: grep 'pool '\''replicated'\'' created' pool 'replicated' created TEST_replicated_pool: 209: ./ceph osd pool create replicated 12 12 replicated replicated_ruleset TEST_replicated_pool: 210: grep 'already exists' pool 'replicated' already exists TEST_replicated_pool: 211: ./ceph osd pool create replicated0 12 12 replicated INVALIDRULESET *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error ENOENT: specified ruleset INVALIDRULESET doesn't exist TEST_replicated_pool: 213: ./ceph osd pool create replicated1 12 12 TEST_replicated_pool: 214: grep 'pool '\''replicated1'\'' created' pool 'replicated1' created TEST_replicated_pool: 216: ./ceph osd pool create replicated2 12 TEST_replicated_pool: 217: grep 'pool '\''replicated2'\'' created' pool 'replicated2' created TEST_replicated_pool: 218: ./ceph osd pool create replicated 12 12 erasure TEST_replicated_pool: 219: grep 'cannot change to type erasure' Error EINVAL: pool 'replicated' cannot change to type erasure run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=19071 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19071 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19071 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19071 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create run: 28: for TEST_function in '$FUNCTIONS' run: 29: setup osd-pool-create setup: 18: local dir=osd-pool-create setup: 19: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create setup: 20: mkdir osd-pool-create run: 30: TEST_replicated_pool_with_ruleset osd-pool-create TEST_replicated_pool_with_ruleset: 166: local dir=osd-pool-create TEST_replicated_pool_with_ruleset: 167: run_mon osd-pool-create a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-pool-create run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-pool-create/a --run-dir=osd-pool-create/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to c17d2312-d08d-4ed5-97de-cfc0d86af816 ./ceph-mon: created monfs at osd-pool-create/a for mon.a run_mon: 43: ./ceph-mon --id a 
--paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-pool-create/a --log-file=osd-pool-create/a/log --mon-cluster-log-file=osd-pool-create/a/log --run-dir=osd-pool-create/a --pid-file=osd-pool-create/a/pidfile --public-addr 127.0.0.1 TEST_replicated_pool_with_ruleset: 168: local ruleset=ruleset0 TEST_replicated_pool_with_ruleset: 169: local root=host1 TEST_replicated_pool_with_ruleset: 170: ./ceph osd crush add-bucket host1 host *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added bucket host1 type host to crush map TEST_replicated_pool_with_ruleset: 171: local failure_domain=osd TEST_replicated_pool_with_ruleset: 172: local poolname=mypool TEST_replicated_pool_with_ruleset: 173: ./ceph osd crush rule create-simple ruleset0 host1 osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_replicated_pool_with_ruleset: 174: ./ceph osd crush rule ls TEST_replicated_pool_with_ruleset: 174: grep ruleset0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "ruleset0"] TEST_replicated_pool_with_ruleset: 175: ./ceph osd pool create mypool 12 12 replicated ruleset0 TEST_replicated_pool_with_ruleset: 176: grep 'pool '\''mypool'\'' created' pool 'mypool' created TTEST_replicated_pool_with_ruleset: 177: ./ceph osd crush rule dump ruleset0 TTEST_replicated_pool_with_ruleset: 177: grep rule_id TTEST_replicated_pool_with_ruleset: 177: awk '-F[ :,]' '{print $4}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_replicated_pool_with_ruleset: 177: rule_id=1 TEST_replicated_pool_with_ruleset: 178: ./ceph osd pool get mypool crush_ruleset TEST_replicated_pool_with_ruleset: 179: grep 'crush_ruleset: 1' crush_ruleset: 1 TEST_replicated_pool_with_ruleset: 181: ./ceph osd pool create newpool 12 12 replicated non-existent TEST_replicated_pool_with_ruleset: 182: grep 'doesn'\''t exist' Error ENOENT: specified ruleset non-existent doesn't exist run: 31: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-pool-create/a/pidfile kill_daemons: 62: pid=19375 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19375 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19375 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19375 kill_daemons: 64: break teardown: 26: rm -fr osd-pool-create main: 108: code=0 main: 112: teardown osd-pool-create teardown: 24: local dir=osd-pool-create teardown: 25: kill_daemons osd-pool-create kill_daemons: 60: local dir=osd-pool-create kkill_daemons: 59: find osd-pool-create kkill_daemons: 59: grep pidfile find: `osd-pool-create': No such file or directory teardown: 26: rm -fr osd-pool-create main: 113: return 0 PASS: test/mon/osd-pool-create.sh
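Every suite in this log tears down through the same kill_daemons helper visible in the trace: it finds each pidfile under the test directory and kill -9s the recorded pid in a short escalating-backoff loop (sleep 0, 1, 1, 1, 2, 3), breaking out as soon as kill reports the process gone. A minimal reconstruction of the pattern from the trace alone (anything beyond the visible statements is an assumption):

    kill_daemons() {
        local dir=$1
        for pidfile in $(find "$dir" 2>/dev/null | grep pidfile) ; do
            local pid
            pid=$(cat "$pidfile")
            for try in 0 1 1 1 2 3 ; do
                # once the process has exited, kill fails and the loop ends
                kill -9 "$pid" 2>/dev/null || break
                sleep "$try"
            done
        done
    }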
main: 105: setup misc setup: 18: local dir=misc setup: 19: teardown misc teardown: 24: local dir=misc teardown: 25: kill_daemons misc kill_daemons: 60: local dir=misc kkill_daemons: 59: find misc kkill_daemons: 59: grep pidfile find: `misc': No such file or directory teardown: 26: rm -fr misc setup: 20: mkdir misc main: 106: local code main: 107: run misc run: 20: local dir=misc run: 22: export CEPH_ARGS rrun: 23: uuidgen run: 23: CEPH_ARGS+='--fsid=0eb10ac2-1058-4a80-81a1-eb164eff5e30 --auth-supported=none ' run: 24: CEPH_ARGS+='--mon-host=127.0.0.1 ' run: 26: setup misc setup: 18: local dir=misc setup: 19: teardown misc teardown: 24: local dir=misc teardown: 25: kill_daemons misc kill_daemons: 60: local dir=misc kkill_daemons: 59: find misc kkill_daemons: 59: grep pidfile teardown: 26: rm -fr misc setup: 20: mkdir misc run: 27: run_mon misc a --public-addr 127.0.0.1 run_mon: 30: local dir=misc run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=misc/a --run-dir=misc/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 0eb10ac2-1058-4a80-81a1-eb164eff5e30 ./ceph-mon: created monfs at misc/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=misc/a --log-file=misc/a/log --mon-cluster-log-file=misc/a/log --run-dir=misc/a --pid-file=misc/a/pidfile --public-addr 127.0.0.1 rrun: 28: set rrun: 28: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' run: 28: FUNCTIONS=TEST_osd_pool_get_set run: 29: for TEST_function in '$FUNCTIONS' run: 30: TEST_osd_pool_get_set misc TEST_osd_pool_get_set: 41: local dir=misc TEST_osd_pool_get_set: 42: ./ceph osd dump TEST_osd_pool_get_set: 42: grep 'pool 0' TEST_osd_pool_get_set: 42: grep hashpspool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0 TEST_osd_pool_get_set: 43: ./ceph osd pool set rbd hashpspool 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 hashpspool to 0 TEST_osd_pool_get_set: 44: ./ceph osd dump TEST_osd_pool_get_set: 44: grep 'pool 0' TEST_osd_pool_get_set: 44: grep hashpspool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_osd_pool_get_set: 45: ./ceph osd pool set rbd hashpspool 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 hashpspool to 1 TEST_osd_pool_get_set: 46: ./ceph osd dump TEST_osd_pool_get_set: 46: grep 'pool 0' TEST_osd_pool_get_set: 46: grep hashpspool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 3 flags hashpspool stripe_width 0 TEST_osd_pool_get_set: 47: ./ceph osd pool set rbd hashpspool false *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 hashpspool to false TEST_osd_pool_get_set: 48: ./ceph osd dump TEST_osd_pool_get_set: 48: grep 'pool 0' TEST_osd_pool_get_set: 48: grep hashpspool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_osd_pool_get_set: 49: ./ceph osd pool set rbd hashpspool false *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 hashpspool to false TEST_osd_pool_get_set: 51: ./ceph osd dump TEST_osd_pool_get_set: 51: grep 'pool 0' TEST_osd_pool_get_set: 51: grep hashpspool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_osd_pool_get_set: 52: ./ceph osd pool set rbd hashpspool true *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 hashpspool to true TEST_osd_pool_get_set: 53: ./ceph osd
dump TEST_osd_pool_get_set: 53: grep 'pool 0' TEST_osd_pool_get_set: 53: grep hashpspool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 6 flags hashpspool stripe_width 0 run: 35: teardown misc teardown: 24: local dir=misc teardown: 25: kill_daemons misc kill_daemons: 60: local dir=misc kkill_daemons: 59: find misc kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat misc/a/pidfile kill_daemons: 62: pid=19709 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19709 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19709 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 19709 kill_daemons: 64: break teardown: 26: rm -fr misc main: 108: code=0 main: 112: teardown misc teardown: 24: local dir=misc teardown: 25: kill_daemons misc kill_daemons: 60: local dir=misc kkill_daemons: 59: find misc kkill_daemons: 59: grep pidfile find: `misc': No such file or directory teardown: 26: rm -fr misc main: 113: return 0 PASS: test/mon/misc.sh main: 105: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush main: 106: local code main: 107: run osd-crush run: 20: local dir=osd-crush run: 22: export CEPH_ARGS rrun: 23: uuidgen run: 23: CEPH_ARGS+='--fsid=74809249-bb6e-4705-9fb9-9542fbfbd930 --auth-supported=none ' run: 24: CEPH_ARGS+='--mon-host=127.0.0.1 ' rrun: 26: set rrun: 26: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' run: 26: FUNCTIONS='TEST_add_ruleset_failed TEST_crush_rule_create_erasure TEST_crush_rule_create_simple TEST_crush_rule_dump TEST_crush_rule_rm TEST_crush_ruleset_match_rule_when_creating' run: 27: for TEST_function in '$FUNCTIONS' run: 28: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush run: 29: run_mon osd-crush a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-crush run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-crush/a --run-dir=osd-crush/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 74809249-bb6e-4705-9fb9-9542fbfbd930 ./ceph-mon: created monfs at osd-crush/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-crush/a --log-file=osd-crush/a/log --mon-cluster-log-file=osd-crush/a/log --run-dir=osd-crush/a --pid-file=osd-crush/a/pidfile --public-addr 127.0.0.1 run: 30: TEST_add_ruleset_failed osd-crush TEST_add_ruleset_failed: 150: local dir=osd-crush TEST_add_ruleset_failed: 151: local root=host1 TEST_add_ruleset_failed: 153: ./ceph osd crush add-bucket host1 host *** DEVELOPER MODE: setting PATH, PYTHONPATH and 
LD_LIBRARY_PATH *** added bucket host1 type host to crush map TEST_add_ruleset_failed: 154: ./ceph osd crush rule create-simple test_rule1 host1 osd firstn *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_add_ruleset_failed: 155: ./ceph osd crush rule create-simple test_rule2 host1 osd firstn *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_add_ruleset_failed: 156: ./ceph osd getcrushmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got crush map from osdmap epoch 4 TEST_add_ruleset_failed: 157: ./crushtool --decompile osd-crush/crushmap TTEST_add_ruleset_failed: 149: seq 3 255 TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat [the two trace entries above repeat once for each of the 253 values of i from 3 to 255; the remaining repetitions are omitted]
TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 158: for i in '$(seq 3 255)' TEST_add_ruleset_failed: 160: cat TEST_add_ruleset_failed: 172: ./crushtool --compile osd-crush/crushmap.txt -o osd-crush/crushmap TEST_add_ruleset_failed: 173: ./ceph osd setcrushmap -i osd-crush/crushmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set crush map TEST_add_ruleset_failed: 174: ./ceph osd crush rule create-simple test_rule_nospace host1 osd firstn TEST_add_ruleset_failed: 174: grep 'Error ENOSPC' Error ENOSPC: failed to add rule 256 because (28) No space left on device run: 34: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-crush/a/pidfile kill_daemons: 62: pid=20249 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 20249 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 20249 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 20249 kill_daemons: 64: break teardown: 26: rm -fr osd-crush run: 27: for TEST_function in '$FUNCTIONS' run: 28: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush run: 29: run_mon osd-crush a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-crush run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-crush/a --run-dir=osd-crush/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 
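
What TEST_add_ruleset_failed exercised above, in isolation: CRUSH rule ids are stored in a single byte, so after the loop pads the decompiled map with a rule for every remaining id up to 255, the next create-simple must fail with ENOSPC. A minimal sketch, assuming a decompiled map already sits in osd-crush/crushmap.txt and the in-tree ./ceph and ./crushtool wrappers; the exact rule body is an assumption, only the count matters:

dir=osd-crush
for i in $(seq 3 255) ; do
    # pad the map so every rule id up to 255 is taken
    cat >> $dir/crushmap.txt <<EOF
rule rule_$i {
    ruleset $i
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 0 type osd
    step emit
}
EOF
done
./crushtool --compile $dir/crushmap.txt -o $dir/crushmap
./ceph osd setcrushmap -i $dir/crushmap
# rule id 256 no longer fits in the 8-bit id space
./ceph osd crush rule create-simple test_rule_nospace host1 osd firstn 2>&1 |
    grep 'Error ENOSPC'

./ceph-mon: set fsid to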
74809249-bb6e-4705-9fb9-9542fbfbd930 ./ceph-mon: created monfs at osd-crush/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-crush/a --log-file=osd-crush/a/log --mon-cluster-log-file=osd-crush/a/log --run-dir=osd-crush/a --pid-file=osd-crush/a/pidfile --public-addr 127.0.0.1 run: 30: TEST_crush_rule_create_erasure osd-crush TEST_crush_rule_create_erasure: 78: local dir=osd-crush TEST_crush_rule_create_erasure: 79: local ruleset=ruleset3 TEST_crush_rule_create_erasure: 83: ./ceph osd crush rule create-erasure ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** created ruleset ruleset3 at 1 TEST_crush_rule_create_erasure: 84: ./ceph osd crush rule create-erasure ruleset3 TEST_crush_rule_create_erasure: 85: grep 'ruleset3 already exists' rule ruleset3 already exists TEST_crush_rule_create_erasure: 86: ./ceph --format xml osd crush rule dump ruleset3 TEST_crush_rule_create_erasure: 87: egrep 'take[^<]+default' TEST_crush_rule_create_erasure: 88: grep 'chooseleaf_indep0host' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 1ruleset313320set_chooseleaf_tries5take-1defaultchooseleaf_indep0hostemit TEST_crush_rule_create_erasure: 89: ./ceph osd crush rule rm ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 90: ./ceph osd crush rule ls TEST_crush_rule_create_erasure: 90: grep ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 94: ./ceph osd crush rule create-erasure ruleset3 default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** created ruleset ruleset3 at 1 TEST_crush_rule_create_erasure: 95: ./ceph osd crush rule ls TEST_crush_rule_create_erasure: 95: grep ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "ruleset3"] TEST_crush_rule_create_erasure: 96: ./ceph osd crush rule rm ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 97: ./ceph osd crush rule ls TEST_crush_rule_create_erasure: 97: grep ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 101: ./ceph osd erasure-code-profile rm default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 102: ./ceph osd erasure-code-profile ls TEST_crush_rule_create_erasure: 102: grep default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 103: ./ceph osd crush rule create-erasure ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** created ruleset ruleset3 at 1 TEST_crush_rule_create_erasure: 104: CEPH_ARGS= TEST_crush_rule_create_erasure: 104: ./ceph --admin-daemon osd-crush/a/ceph-mon.a.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_crush_rule_create_erasure: 105: grep 'profile default set' osd-crush/a/log 2014-10-08 11:16:41.365040 2b9850499700 20 mon.a@0(leader).osd e6 erasure code profile default set TEST_crush_rule_create_erasure: 106: ./ceph osd erasure-code-profile ls TEST_crush_rule_create_erasure: 106: grep default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** default TEST_crush_rule_create_erasure: 107: ./ceph osd crush rule rm ruleset3
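
Condensed from the TEST_crush_rule_create_erasure trace above: duplicate rule names are rejected, and once the default erasure code profile has been removed, create-erasure transparently recreates it, which the test verifies by flushing the monitor log through the admin socket and grepping for 'profile default set'. A sketch under the same paths as the trace:

./ceph osd crush rule create-erasure ruleset3
./ceph osd crush rule create-erasure ruleset3 2>&1 | grep 'ruleset3 already exists'
./ceph osd crush rule rm ruleset3
./ceph osd erasure-code-profile rm default
./ceph osd crush rule create-erasure ruleset3   # must recreate the default profile
CEPH_ARGS= ./ceph --admin-daemon osd-crush/a/ceph-mon.a.asok log flush
grep 'profile default set' osd-crush/a/log

*** DEVELOPER MODE: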
setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_erasure: 108: ./ceph osd crush rule ls TEST_crush_rule_create_erasure: 108: grep ruleset3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 34: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-crush/a/pidfile kill_daemons: 62: pid=20783 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 20783 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 20783 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 20783 kill_daemons: 64: break teardown: 26: rm -fr osd-crush run: 27: for TEST_function in '$FUNCTIONS' run: 28: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush run: 29: run_mon osd-crush a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-crush run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-crush/a --run-dir=osd-crush/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 74809249-bb6e-4705-9fb9-9542fbfbd930 ./ceph-mon: created monfs at osd-crush/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-crush/a --log-file=osd-crush/a/log --mon-cluster-log-file=osd-crush/a/log --run-dir=osd-crush/a --pid-file=osd-crush/a/pidfile --public-addr 127.0.0.1 run: 30: TEST_crush_rule_create_simple osd-crush TEST_crush_rule_create_simple: 39: local dir=osd-crush TEST_crush_rule_create_simple: 40: ./ceph --format xml osd crush rule dump replicated_ruleset TEST_crush_rule_create_simple: 41: egrep 'take[^<]+default' TEST_crush_rule_create_simple: 42: grep 'choose_firstn0osd' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 0replicated_ruleset01110take-1defaultchoose_firstn0osdemit TEST_crush_rule_create_simple: 43: local ruleset=ruleset0 TEST_crush_rule_create_simple: 44: local root=host1 TEST_crush_rule_create_simple: 45: ./ceph osd crush add-bucket host1 host *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added bucket host1 type host to crush map TEST_crush_rule_create_simple: 46: local failure_domain=osd TEST_crush_rule_create_simple: 47: ./ceph osd crush rule create-simple ruleset0 host1 osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_create_simple: 48: ./ceph osd crush rule create-simple ruleset0 host1 osd TEST_crush_rule_create_simple: 49: grep 'ruleset0 already exists' ruleset ruleset0 already exists TEST_crush_rule_create_simple: 50: ./ceph --format xml osd crush rule dump ruleset0 TEST_crush_rule_create_simple: 51: egrep 'take[^<]+host1' TEST_crush_rule_create_simple: 52: grep 'choose_firstn0osd' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 
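
The replicated twin of the previous test, reduced to its essentials; host1 is the root bucket the test adds for itself, and the tag-stripped XML on the next line is the output of the assertion pipeline (the angle-bracket tags did not survive the log capture):

./ceph osd crush add-bucket host1 host
./ceph osd crush rule create-simple ruleset0 host1 osd
# a second create-simple under the same name must be refused
./ceph osd crush rule create-simple ruleset0 host1 osd 2>&1 |
    grep 'ruleset0 already exists'
# the new rule must take host1, not the default root
./ceph --format xml osd crush rule dump ruleset0 | egrep 'take[^<]+host1'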
1ruleset011110take-2host1choose_firstn0osdemit TEST_crush_rule_create_simple: 53: ./ceph osd crush rule rm ruleset0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 34: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-crush/a/pidfile kill_daemons: 62: pid=21440 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21440 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21440 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21440 kill_daemons: 64: break teardown: 26: rm -fr osd-crush run: 27: for TEST_function in '$FUNCTIONS' run: 28: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush run: 29: run_mon osd-crush a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-crush run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-crush/a --run-dir=osd-crush/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 74809249-bb6e-4705-9fb9-9542fbfbd930 ./ceph-mon: created monfs at osd-crush/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-crush/a --log-file=osd-crush/a/log --mon-cluster-log-file=osd-crush/a/log --run-dir=osd-crush/a --pid-file=osd-crush/a/pidfile --public-addr 127.0.0.1 run: 30: TEST_crush_rule_dump osd-crush TEST_crush_rule_dump: 57: local dir=osd-crush TEST_crush_rule_dump: 58: local ruleset=ruleset1 TEST_crush_rule_dump: 59: ./ceph osd crush rule create-erasure ruleset1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** created ruleset ruleset1 at 1 TEST_crush_rule_dump: 60: local expected TEST_crush_rule_dump: 61: expected='ruleset1' TEST_crush_rule_dump: 62: ./ceph --format xml osd crush rule dump ruleset1 TEST_crush_rule_dump: 62: grep 'ruleset1' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 1ruleset113320set_chooseleaf_tries5take-1defaultchooseleaf_indep0hostemit TEST_crush_rule_dump: 63: expected='"rule_name": "ruleset1"' TEST_crush_rule_dump: 64: ./ceph osd crush rule dump TEST_crush_rule_dump: 64: grep '"rule_name": "ruleset1"' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "rule_name": "ruleset1", TEST_crush_rule_dump: 65: ./ceph osd crush rule dump non_existent_ruleset *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error ENOENT: unknown crush ruleset 'non_existent_ruleset' TEST_crush_rule_dump: 66: ./ceph osd crush rule rm ruleset1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 34: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find 
$dir | grep pidfile)' kkill_daemons: 62: cat osd-crush/a/pidfile kill_daemons: 62: pid=21717 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21717 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21717 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21717 kill_daemons: 64: break teardown: 26: rm -fr osd-crush run: 27: for TEST_function in '$FUNCTIONS' run: 28: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush run: 29: run_mon osd-crush a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-crush run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-crush/a --run-dir=osd-crush/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 74809249-bb6e-4705-9fb9-9542fbfbd930 ./ceph-mon: created monfs at osd-crush/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-crush/a --log-file=osd-crush/a/log --mon-cluster-log-file=osd-crush/a/log --run-dir=osd-crush/a --pid-file=osd-crush/a/pidfile --public-addr 127.0.0.1 run: 30: TEST_crush_rule_rm osd-crush TEST_crush_rule_rm: 70: local ruleset=erasure2 TEST_crush_rule_rm: 71: ./ceph osd crush rule create-erasure erasure2 default *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** created ruleset erasure2 at 1 TEST_crush_rule_rm: 72: ./ceph osd crush rule ls TEST_crush_rule_rm: 72: grep erasure2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "erasure2"] TEST_crush_rule_rm: 73: ./ceph osd crush rule rm erasure2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_rule_rm: 74: ./ceph osd crush rule ls TEST_crush_rule_rm: 74: grep erasure2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** run: 34: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-crush/a/pidfile kill_daemons: 62: pid=21951 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21951 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21951 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 21951 kill_daemons: 64: break teardown: 26: rm -fr osd-crush run: 27: for TEST_function in '$FUNCTIONS' run: 28: setup osd-crush setup: 18: local dir=osd-crush setup: 19: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush setup: 20: mkdir osd-crush run: 29: run_mon osd-crush a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-crush run_mon: 31: shift run_mon: 32: local id=a run_mon: 
33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-crush/a --run-dir=osd-crush/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 74809249-bb6e-4705-9fb9-9542fbfbd930 ./ceph-mon: created monfs at osd-crush/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-crush/a --log-file=osd-crush/a/log --mon-cluster-log-file=osd-crush/a/log --run-dir=osd-crush/a --pid-file=osd-crush/a/pidfile --public-addr 127.0.0.1 run: 30: TEST_crush_ruleset_match_rule_when_creating osd-crush TEST_crush_ruleset_match_rule_when_creating: 137: local dir=osd-crush TEST_crush_ruleset_match_rule_when_creating: 138: local root=host1 TEST_crush_ruleset_match_rule_when_creating: 140: generate_manipulated_rules osd-crush generate_manipulated_rules: 119: local dir=osd-crush generate_manipulated_rules: 120: ./ceph osd crush add-bucket host1 host *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added bucket host1 type host to crush map generate_manipulated_rules: 121: ./ceph osd crush rule create-simple test_rule1 host1 osd firstn *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** generate_manipulated_rules: 122: ./ceph osd crush rule create-simple test_rule2 host1 osd firstn *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** generate_manipulated_rules: 123: ./ceph osd getcrushmap -o osd-crush/original_map *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got crush map from osdmap epoch 4 generate_manipulated_rules: 124: ./crushtool -d osd-crush/original_map -o osd-crush/decoded_original_map generate_manipulated_rules: 126: sed -i 's/ruleset 0/ruleset 3/' osd-crush/decoded_original_map generate_manipulated_rules: 127: sed -i 's/ruleset 2/ruleset 0/' osd-crush/decoded_original_map generate_manipulated_rules: 128: sed -i 's/ruleset 1/ruleset 2/' osd-crush/decoded_original_map generate_manipulated_rules: 130: ./crushtool -c osd-crush/decoded_original_map -o osd-crush/new_map generate_manipulated_rules: 131: ./ceph osd setcrushmap -i osd-crush/new_map *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set crush map generate_manipulated_rules: 133: ./ceph osd crush rule dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** [ { "rule_id": 0, "rule_name": "replicated_ruleset", "ruleset": 3, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -1, "item_name": "default"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}, { "rule_id": 1, "rule_name": "test_rule1", "ruleset": 2, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -2, "item_name": "host1"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}, { "rule_id": 2, "rule_name": "test_rule2", "ruleset": 0, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -2, "item_name": "host1"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}] TEST_crush_ruleset_match_rule_when_creating: 142: ./ceph osd crush rule create-simple special_rule_simple host1 osd firstn *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** TEST_crush_ruleset_match_rule_when_creating: 144: ./ceph osd crush rule dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
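
generate_manipulated_rules, traced above, forces rule_id and ruleset out of sync by rewriting the decompiled map before injecting it back; the first dump above shows the result (rulesets 3/2/0 against rule ids 0/1/2), and the dump that follows checks the rule added afterwards. The manipulation itself, lifted straight from the trace:

dir=osd-crush
./ceph osd getcrushmap -o $dir/original_map
./crushtool -d $dir/original_map -o $dir/decoded_original_map
# rotate the ruleset ids: 0 -> 3, 2 -> 0, 1 -> 2
sed -i 's/ruleset 0/ruleset 3/' $dir/decoded_original_map
sed -i 's/ruleset 2/ruleset 0/' $dir/decoded_original_map
sed -i 's/ruleset 1/ruleset 2/' $dir/decoded_original_map
./crushtool -c $dir/decoded_original_map -o $dir/new_map
./ceph osd setcrushmap -i $dir/new_map

[ { "rule_id": 0,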
"rule_name": "replicated_ruleset", "ruleset": 3, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -1, "item_name": "default"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}, { "rule_id": 1, "rule_name": "test_rule1", "ruleset": 2, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -2, "item_name": "host1"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}, { "rule_id": 2, "rule_name": "test_rule2", "ruleset": 0, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -2, "item_name": "host1"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}, { "rule_id": 3, "rule_name": "special_rule_simple", "ruleset": 3, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -2, "item_name": "host1"}, { "op": "choose_firstn", "num": 0, "type": "osd"}, { "op": "emit"}]}] TEST_crush_ruleset_match_rule_when_creating: 146: check_ruleset_id_match_rule_id special_rule_simple check_ruleset_id_match_rule_id: 112: local rule_name=special_rule_simple ccheck_ruleset_id_match_rule_id: 113: ./ceph osd crush rule dump special_rule_simple ccheck_ruleset_id_match_rule_id: 113: grep '"rule_id":' ccheck_ruleset_id_match_rule_id: 113: awk -F ':|,' '{print int($2)}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** check_ruleset_id_match_rule_id: 113: rule_id=3 ccheck_ruleset_id_match_rule_id: 114: ./ceph osd crush rule dump special_rule_simple ccheck_ruleset_id_match_rule_id: 114: grep '"ruleset":' ccheck_ruleset_id_match_rule_id: 114: awk -F ':|,' '{print int($2)}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** check_ruleset_id_match_rule_id: 114: ruleset_id=3 check_ruleset_id_match_rule_id: 115: test 3 = 3 run: 34: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-crush/a/pidfile kill_daemons: 62: pid=22144 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 22144 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 22144 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 22144 kill_daemons: 64: break teardown: 26: rm -fr osd-crush main: 108: code=0 main: 112: teardown osd-crush teardown: 24: local dir=osd-crush teardown: 25: kill_daemons osd-crush kill_daemons: 60: local dir=osd-crush kkill_daemons: 59: find osd-crush kkill_daemons: 59: grep pidfile find: `osd-crush': No such file or directory teardown: 26: rm -fr osd-crush main: 113: return 0 PASS: test/mon/osd-crush.sh main: 105: setup osd-erasure-code-profile setup: 18: local dir=osd-erasure-code-profile setup: 19: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile find: `osd-erasure-code-profile': No such file or directory teardown: 26: rm -fr osd-erasure-code-profile setup: 20: mkdir osd-erasure-code-profile main: 106: local code main: 107: run osd-erasure-code-profile run: 20: local dir=osd-erasure-code-profile run: 22: export CEPH_ARGS rrun: 23: uuidgen run: 23: CEPH_ARGS+='--fsid=d56f579f-b77a-4be9-955a-fac67768d30d 
--auth-supported=none ' run: 24: CEPH_ARGS+='--mon-host=127.0.0.1 ' run: 26: local id=a run: 27: call_TEST_functions osd-erasure-code-profile a --public-addr 127.0.0.1 call_TEST_functions: 71: local dir=osd-erasure-code-profile call_TEST_functions: 72: shift call_TEST_functions: 73: local id=--public-addr call_TEST_functions: 74: shift call_TEST_functions: 76: setup osd-erasure-code-profile setup: 18: local dir=osd-erasure-code-profile setup: 19: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile teardown: 26: rm -fr osd-erasure-code-profile setup: 20: mkdir osd-erasure-code-profile call_TEST_functions: 77: run_mon osd-erasure-code-profile --public-addr --public-addr 127.0.0.1 run_mon: 30: local dir=osd-erasure-code-profile run_mon: 31: shift run_mon: 32: local id=--public-addr run_mon: 33: shift run_mon: 34: dir+=/--public-addr run_mon: 37: ./ceph-mon --id --public-addr --mkfs --mon-data=osd-erasure-code-profile/--public-addr --run-dir=osd-erasure-code-profile/--public-addr --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.--public-addr ./ceph-mon: set fsid to d56f579f-b77a-4be9-955a-fac67768d30d ./ceph-mon: created monfs at osd-erasure-code-profile/--public-addr for mon.--public-addr run_mon: 43: ./ceph-mon --id --public-addr --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-erasure-code-profile/--public-addr --log-file=osd-erasure-code-profile/--public-addr/log --mon-cluster-log-file=osd-erasure-code-profile/--public-addr/log --run-dir=osd-erasure-code-profile/--public-addr --pid-file=osd-erasure-code-profile/--public-addr/pidfile --public-addr 127.0.0.1 ccall_TEST_functions: 78: set ccall_TEST_functions: 78: sed -n -e 's/^\(SHARE_MON_TEST_[0-9a-z_]*\) .*/\1/p' call_TEST_functions: 78: SHARE_MON_FUNCTIONS='SHARE_MON_TEST_get SHARE_MON_TEST_ls SHARE_MON_TEST_rm SHARE_MON_TEST_set' call_TEST_functions: 79: for TEST_function in '$SHARE_MON_FUNCTIONS' call_TEST_functions: 80: SHARE_MON_TEST_get osd-erasure-code-profile --public-addr SHARE_MON_TEST_get: 98: local dir=osd-erasure-code-profile SHARE_MON_TEST_get: 99: local id=--public-addr SHARE_MON_TEST_get: 101: local default_profile=default SHARE_MON_TEST_get: 102: ./ceph osd erasure-code-profile get default SHARE_MON_TEST_get: 103: grep plugin=jerasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** plugin=jerasure SHARE_MON_TEST_get: 104: ./ceph --format xml osd erasure-code-profile get default SHARE_MON_TEST_get: 105: grep 'jerasure' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** .libs21jerasurereed_sol_van SHARE_MON_TEST_get: 106: ./ceph osd erasure-code-profile get WRONG SHARE_MON_TEST_get: 107: grep -q 'unknown erasure code profile '\''WRONG'\''' osd-erasure-code-profile/out call_TEST_functions: 79: for TEST_function in '$SHARE_MON_FUNCTIONS' call_TEST_functions: 80: SHARE_MON_TEST_ls osd-erasure-code-profile --public-addr SHARE_MON_TEST_ls: 62: local dir=osd-erasure-code-profile SHARE_MON_TEST_ls: 63: local id=--public-addr SHARE_MON_TEST_ls: 65: local profile=myprofile SHARE_MON_TEST_ls: 66: ./ceph osd erasure-code-profile ls SHARE_MON_TEST_ls: 66: grep myprofile
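
The ccall_TEST_functions: 78 trace above shows how the harness discovers its tests: it scrapes function names out of the shell's own symbol table with set and sed. Roughly, with the per-test error handling assumed:

SHARE_MON_FUNCTIONS=$(set | sed -n -e 's/^\(SHARE_MON_TEST_[0-9a-z_]*\) .*/\1/p')
FUNCTIONS=$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')
# SHARE_MON_TEST_* functions run against one shared monitor;
# each TEST_* function gets its own setup/teardown cycle
for TEST_function in $FUNCTIONS ; do
    setup $dir
    $TEST_function $dir || return 1
    teardown $dir
done

*** DEVELOPER MODE: setting PATH, PYTHONPATH and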
LD_LIBRARY_PATH *** SHARE_MON_TEST_ls: 67: ./ceph osd erasure-code-profile set myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_ls: 68: ./ceph osd erasure-code-profile ls SHARE_MON_TEST_ls: 68: grep myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** myprofile SHARE_MON_TEST_ls: 69: ./ceph --format xml osd erasure-code-profile ls SHARE_MON_TEST_ls: 70: grep 'myprofile' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** defaultmyprofile SHARE_MON_TEST_ls: 72: ./ceph osd erasure-code-profile rm myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** call_TEST_functions: 79: for TEST_function in '$SHARE_MON_FUNCTIONS' call_TEST_functions: 80: SHARE_MON_TEST_rm osd-erasure-code-profile --public-addr SHARE_MON_TEST_rm: 76: local dir=osd-erasure-code-profile SHARE_MON_TEST_rm: 77: local id=--public-addr SHARE_MON_TEST_rm: 79: local profile=myprofile SHARE_MON_TEST_rm: 80: ./ceph osd erasure-code-profile set myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_rm: 81: ./ceph osd erasure-code-profile ls SHARE_MON_TEST_rm: 81: grep myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** myprofile SHARE_MON_TEST_rm: 82: ./ceph osd erasure-code-profile rm myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_rm: 83: ./ceph osd erasure-code-profile ls SHARE_MON_TEST_rm: 83: grep myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_rm: 84: ./ceph osd erasure-code-profile rm WRONG SHARE_MON_TEST_rm: 85: grep 'WRONG does not exist' erasure-code-profile WRONG does not exist SHARE_MON_TEST_rm: 87: ./ceph osd erasure-code-profile set myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_rm: 88: ./ceph osd pool create poolname 12 12 erasure myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'poolname' created SHARE_MON_TEST_rm: 89: ./ceph osd erasure-code-profile rm myprofile SHARE_MON_TEST_rm: 90: grep 'poolname.*using.*myprofile' osd-erasure-code-profile/out Error EBUSY: poolname pool(s) are using the erasure code profile 'myprofile' SHARE_MON_TEST_rm: 91: ./ceph osd pool delete poolname poolname --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'poolname' removed SHARE_MON_TEST_rm: 92: ./ceph osd erasure-code-profile rm myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_rm: 94: ./ceph osd erasure-code-profile rm myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** erasure-code-profile myprofile does not exist call_TEST_functions: 79: for TEST_function in '$SHARE_MON_FUNCTIONS' call_TEST_functions: 80: SHARE_MON_TEST_set osd-erasure-code-profile --public-addr SHARE_MON_TEST_set: 31: local dir=osd-erasure-code-profile SHARE_MON_TEST_set: 32: local id=--public-addr SHARE_MON_TEST_set: 34: local profile=myprofile SHARE_MON_TEST_set: 38: ./ceph osd erasure-code-profile set myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_set: 39: ./ceph osd erasure-code-profile get myprofile SHARE_MON_TEST_set: 40: grep plugin=jerasure *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** plugin=jerasure SHARE_MON_TEST_set: 41: ./ceph osd erasure-code-profile rm myprofile
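
SHARE_MON_TEST_rm above, and SHARE_MON_TEST_set continuing below, pin down the two guard rails on profiles: rm is refused with EBUSY while any pool still references the profile, and set is refused with EPERM on an existing profile unless --force is given. In isolation, using the same commands as the trace:

./ceph osd erasure-code-profile set myprofile
./ceph osd pool create poolname 12 12 erasure myprofile
./ceph osd erasure-code-profile rm myprofile 2>&1 |
    grep "poolname.*using.*myprofile"            # Error EBUSY
./ceph osd pool delete poolname poolname --yes-i-really-really-mean-it
./ceph osd erasure-code-profile rm myprofile     # succeeds once unused
./ceph osd erasure-code-profile set myprofile key=value plugin=example
./ceph osd erasure-code-profile set myprofile 2>&1 |
    grep 'will not override'                     # Error EPERM
./ceph osd erasure-code-profile set myprofile key=other --force

*** DEVELOPER MODE: setting PATH, PYTHONPATH and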
LD_LIBRARY_PATH *** SHARE_MON_TEST_set: 45: ./ceph osd erasure-code-profile set myprofile key=value plugin=example *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_set: 47: ./ceph osd erasure-code-profile get myprofile SHARE_MON_TEST_set: 48: grep -e key=value -e plugin=example *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** key=value plugin=example SHARE_MON_TEST_set: 52: ./ceph osd erasure-code-profile set myprofile SHARE_MON_TEST_set: 53: grep 'will not override' osd-erasure-code-profile/out Error EPERM: will not override erasure code profile myprofile SHARE_MON_TEST_set: 54: ./ceph osd erasure-code-profile set myprofile key=other --force *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** SHARE_MON_TEST_set: 55: ./ceph osd erasure-code-profile get myprofile SHARE_MON_TEST_set: 56: grep key=other *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** key=other SHARE_MON_TEST_set: 58: ./ceph osd erasure-code-profile rm myprofile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** call_TEST_functions: 85: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-erasure-code-profile/--public-addr/pidfile kill_daemons: 62: pid=22608 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 22608 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 22608 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 22608 kill_daemons: 64: break teardown: 26: rm -fr osd-erasure-code-profile ccall_TEST_functions: 87: set ccall_TEST_functions: 87: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' call_TEST_functions: 87: FUNCTIONS='TEST_format_invalid TEST_format_json TEST_format_plain' call_TEST_functions: 88: for TEST_function in '$FUNCTIONS' call_TEST_functions: 89: setup osd-erasure-code-profile setup: 18: local dir=osd-erasure-code-profile setup: 19: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile find: `osd-erasure-code-profile': No such file or directory teardown: 26: rm -fr osd-erasure-code-profile setup: 20: mkdir osd-erasure-code-profile call_TEST_functions: 90: TEST_format_invalid osd-erasure-code-profile TEST_format_invalid: 111: local dir=osd-erasure-code-profile TEST_format_invalid: 113: local profile=profile TEST_format_invalid: 116: run_mon osd-erasure-code-profile a --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile 1 run_mon: 30: local dir=osd-erasure-code-profile run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-erasure-code-profile/a --run-dir=osd-erasure-code-profile/a --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile 1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to d56f579f-b77a-4be9-955a-fac67768d30d ./ceph-mon: created monfs at osd-erasure-code-profile/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 
--osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-erasure-code-profile/a --log-file=osd-erasure-code-profile/a/log --mon-cluster-log-file=osd-erasure-code-profile/a/log --run-dir=osd-erasure-code-profile/a --pid-file=osd-erasure-code-profile/a/pidfile --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile 1 TEST_format_invalid: 118: ./ceph osd erasure-code-profile set profile TEST_format_invalid: 119: cat osd-erasure-code-profile/out *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: 1 must be a JSON object but is of type 4 instead TEST_format_invalid: 120: grep 'must be a JSON object' osd-erasure-code-profile/out Error EINVAL: 1 must be a JSON object but is of type 4 instead call_TEST_functions: 91: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-erasure-code-profile/a/pidfile kill_daemons: 62: pid=23796 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23796 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23796 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23796 kill_daemons: 64: break teardown: 26: rm -fr osd-erasure-code-profile call_TEST_functions: 88: for TEST_function in '$FUNCTIONS' call_TEST_functions: 89: setup osd-erasure-code-profile setup: 18: local dir=osd-erasure-code-profile setup: 19: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile find: `osd-erasure-code-profile': No such file or directory teardown: 26: rm -fr osd-erasure-code-profile setup: 20: mkdir osd-erasure-code-profile call_TEST_functions: 90: TEST_format_json osd-erasure-code-profile TEST_format_json: 124: local dir=osd-erasure-code-profile TEST_format_json: 127: expected='"plugin":"example"' TEST_format_json: 128: run_mon osd-erasure-code-profile a --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile '{"plugin":"example"}' run_mon: 30: local dir=osd-erasure-code-profile run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-erasure-code-profile/a --run-dir=osd-erasure-code-profile/a --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile '{"plugin":"example"}' ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to d56f579f-b77a-4be9-955a-fac67768d30d ./ceph-mon: created monfs at osd-erasure-code-profile/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-erasure-code-profile/a --log-file=osd-erasure-code-profile/a/log --mon-cluster-log-file=osd-erasure-code-profile/a/log --run-dir=osd-erasure-code-profile/a --pid-file=osd-erasure-code-profile/a/pidfile --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile '{"plugin":"example"}' TEST_format_json: 
130: ./ceph --format json osd erasure-code-profile get default TEST_format_json: 131: grep '"plugin":"example"' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"directory":".libs","plugin":"example"} call_TEST_functions: 91: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-erasure-code-profile/a/pidfile kill_daemons: 62: pid=23867 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23867 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23867 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23867 kill_daemons: 64: break teardown: 26: rm -fr osd-erasure-code-profile call_TEST_functions: 88: for TEST_function in '$FUNCTIONS' call_TEST_functions: 89: setup osd-erasure-code-profile setup: 18: local dir=osd-erasure-code-profile setup: 19: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile find: `osd-erasure-code-profile': No such file or directory teardown: 26: rm -fr osd-erasure-code-profile setup: 20: mkdir osd-erasure-code-profile call_TEST_functions: 90: TEST_format_plain osd-erasure-code-profile TEST_format_plain: 135: local dir=osd-erasure-code-profile TEST_format_plain: 138: expected='"plugin":"example"' TEST_format_plain: 139: run_mon osd-erasure-code-profile a --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile plugin=example run_mon: 30: local dir=osd-erasure-code-profile run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-erasure-code-profile/a --run-dir=osd-erasure-code-profile/a --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile plugin=example ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to d56f579f-b77a-4be9-955a-fac67768d30d ./ceph-mon: created monfs at osd-erasure-code-profile/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-erasure-code-profile/a --log-file=osd-erasure-code-profile/a/log --mon-cluster-log-file=osd-erasure-code-profile/a/log --run-dir=osd-erasure-code-profile/a --pid-file=osd-erasure-code-profile/a/pidfile --public-addr 127.0.0.1 --osd_pool_default_erasure-code-profile plugin=example TEST_format_plain: 141: ./ceph --format json osd erasure-code-profile get default TEST_format_plain: 142: grep '"plugin":"example"' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {"directory":".libs","plugin":"example"} call_TEST_functions: 91: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat 
osd-erasure-code-profile/a/pidfile kill_daemons: 62: pid=23937 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23937 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23937 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 23937 kill_daemons: 64: break teardown: 26: rm -fr osd-erasure-code-profile main: 108: code=0 main: 112: teardown osd-erasure-code-profile teardown: 24: local dir=osd-erasure-code-profile teardown: 25: kill_daemons osd-erasure-code-profile kill_daemons: 60: local dir=osd-erasure-code-profile kkill_daemons: 59: find osd-erasure-code-profile kkill_daemons: 59: grep pidfile find: `osd-erasure-code-profile': No such file or directory teardown: 26: rm -fr osd-erasure-code-profile main: 113: return 0 PASS: test/mon/osd-erasure-code-profile.sh + PS4='${FUNCNAME[0]}: $LINENO: ' : 20: DIR=mkfs : 21: export CEPH_CONF=/dev/null : 21: CEPH_CONF=/dev/null : 22: unset CEPH_ARGS : 23: MON_ID=a : 24: MON_DIR=mkfs/a : 25: PORT=7451 : 26: MONA=127.0.0.1:7451 : 27: TIMEOUT=360 : 179: run run: 166: local actions run: 167: actions+='makedir ' run: 168: actions+='idempotent ' run: 169: actions+='auth_cephx_key ' run: 170: actions+='auth_cephx_keyring ' run: 171: actions+='auth_none ' run: 172: for action in '$actions' run: 173: setup setup: 30: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile find: `mkfs': No such file or directory teardown: 36: rm -fr mkfs setup: 31: mkdir mkfs run: 174: makedir makedir: 142: local toodeep=mkfs/a/toodeep makedir: 146: ./ceph-mon --id a --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a/toodeep makedir: 149: tee mkfs/makedir.log mkdir(mkfs/a/toodeep) : (2) No such file or directory makedir: 150: grep 'toodeep.*No such file' mkfs/makedir.log makedir: 151: rm mkfs/makedir.log makedir: 154: mkdir mkfs/a makedir: 155: mon_mkfs --auth-supported=none makedir: 155: tee mkfs/makedir.log mmon_mkfs: 40: uuidgen mon_mkfs: 40: local fsid=29c98e95-50dd-4a8f-ab26-87f958b0b7db mon_mkfs: 43: ./ceph-mon --id a --fsid 29c98e95-50dd-4a8f-ab26-87f958b0b7db --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 --auth-supported=none ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 29c98e95-50dd-4a8f-ab26-87f958b0b7db ./ceph-mon: created monfs at mkfs/a for mon.a makedir: 156: grep 'mkfs/a already exists' mkfs/makedir.log run: 175: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile teardown: 36: rm -fr mkfs run: 172: for action in '$actions' run: 173: setup setup: 30: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile find: `mkfs': No such file or directory teardown: 36: rm -fr mkfs setup: 31: mkdir mkfs run: 174: idempotent idempotent: 160: mon_mkfs --auth-supported=none mmon_mkfs: 40: uuidgen mon_mkfs: 40: local fsid=d638c73d-9e44-4a23-8142-5d5cfd68e6e7 mon_mkfs: 43: ./ceph-mon --id a --fsid d638c73d-9e44-4a23-8142-5d5cfd68e6e7 --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 --auth-supported=none ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to d638c73d-9e44-4a23-8142-5d5cfd68e6e7 ./ceph-mon: created monfs at mkfs/a for mon.a idempotent: 161: mon_mkfs --auth-supported=none idempotent: 161: tee mkfs/makedir.log mmon_mkfs: 40: uuidgen 
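
From the mmon_mkfs traces in this run, the helper behind mkfs.sh looks essentially like this; MON_ID, MON_DIR and MONA are the variables set at the top of the script, and passing the caller's extra arguments through at the end is an assumption consistent with every invocation above:

mon_mkfs() {
    local fsid=$(uuidgen)
    ./ceph-mon \
        --id $MON_ID \
        --fsid $fsid \
        --osd-pool-default-erasure-code-directory=.libs \
        --mkfs \
        --mon-data=$MON_DIR \
        --mon-initial-members=$MON_ID \
        --mon-host=$MONA \
        "$@"
}

Calling it twice, as the idempotent test below does, must fail the second time with 'mkfs/a already exists and is not empty'.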
mon_mkfs: 40: local fsid=482957f5-5ad5-484a-affb-92ef9aaa0553 mon_mkfs: 43: ./ceph-mon --id a --fsid 482957f5-5ad5-484a-affb-92ef9aaa0553 --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 --auth-supported=none 'mkfs/a' already exists and is not empty: monitor may already exist idempotent: 162: grep ''\''mkfs/a'\'' already exists' mkfs/makedir.log run: 175: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile teardown: 36: rm -fr mkfs run: 172: for action in '$actions' run: 173: setup setup: 30: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile find: `mkfs': No such file or directory teardown: 36: rm -fr mkfs setup: 31: mkdir mkfs run: 174: auth_cephx_key auth_cephx_key: 115: '[' -f /etc/ceph/keyring ']' aauth_cephx_key: 120: ./ceph-authtool --gen-print-key auth_cephx_key: 120: local key=AQA4HTVUuBXBJRAAmpH3hP8RQtDMqRihhIExww== auth_cephx_key: 122: mon_mkfs '--key=corrupted key' mmon_mkfs: 40: uuidgen mon_mkfs: 40: local fsid=2405a0f6-1a44-4eef-8a35-3c198f00eefe mon_mkfs: 43: ./ceph-mon --id a --fsid 2405a0f6-1a44-4eef-8a35-3c198f00eefe --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 '--key=corrupted key' ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2405a0f6-1a44-4eef-8a35-3c198f00eefe 2014-10-08 11:17:12.667221 2b259be0bf40 -1 mon.a@-1(probing) e0 unable to find a keyring file on /etc/ceph/ceph.mon.a.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin ./ceph-mon: error creating monfs: (22) Invalid argument 2014-10-08 11:17:12.667462 2b259be0bf40 -1 mon.a@-1(probing) e0 error decoding keyring [mon.] key = corrupted key caps mon = "allow *" : buffer::malformed_input: error setting modifier for [mon.] type=key val=corrupted key auth_cephx_key: 125: rm -fr mkfs/a/store.db auth_cephx_key: 128: mon_mkfs --key=AQA4HTVUuBXBJRAAmpH3hP8RQtDMqRihhIExww== mmon_mkfs: 40: uuidgen mon_mkfs: 40: local fsid=c8d4873e-4909-47bf-8091-ce94d0799fc2 mon_mkfs: 43: ./ceph-mon --id a --fsid c8d4873e-4909-47bf-8091-ce94d0799fc2 --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 --key=AQA4HTVUuBXBJRAAmpH3hP8RQtDMqRihhIExww== ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to c8d4873e-4909-47bf-8091-ce94d0799fc2 2014-10-08 11:17:12.697199 2b2f47adbf40 -1 mon.a@-1(probing) e0 unable to find a keyring file on /etc/ceph/ceph.mon.a.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin ./ceph-mon: created monfs at mkfs/a for mon.a auth_cephx_key: 130: '[' -f mkfs/a/keyring ']' auth_cephx_key: 131: grep AQA4HTVUuBXBJRAAmpH3hP8RQtDMqRihhIExww== mkfs/a/keyring key = AQA4HTVUuBXBJRAAmpH3hP8RQtDMqRihhIExww== auth_cephx_key: 133: mon_run mon_run: 55: ./ceph-mon --id a --chdir= --osd-pool-default-erasure-code-directory=.libs --mon-data=mkfs/a --log-file=mkfs/a/log --mon-cluster-log-file=mkfs/a/log --run-dir=mkfs/a --pid-file=mkfs/a/pidfile --public-addr 127.0.0.1:7451 auth_cephx_key: 135: timeout 360 ./ceph --name mon. 
--keyring mkfs/a/keyring --mon-host 127.0.0.1:7451 mon stat *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** e1: 1 mons at {a=127.0.0.1:7451/0}, election epoch 2, quorum 0 a run: 175: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile kill_daemons: 68: for pidfile in '$(find $DIR -name pidfile)' kkill_daemons: 69: cat mkfs/a/pidfile kill_daemons: 69: pid=24055 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24055 kill_daemons: 72: sleep 0 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24055 kill_daemons: 72: sleep 1 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24055 ./test/mon/mkfs.sh: line 71: kill: (24055) - No such process kill_daemons: 71: break teardown: 36: rm -fr mkfs run: 172: for action in '$actions' run: 173: setup setup: 30: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile find: `mkfs': No such file or directory teardown: 36: rm -fr mkfs setup: 31: mkdir mkfs run: 174: auth_cephx_keyring auth_cephx_keyring: 96: cat auth_cephx_keyring: 102: mon_mkfs --keyring=mkfs/keyring mmon_mkfs: 40: uuidgen mon_mkfs: 40: local fsid=582bf7cd-f96f-4fd7-8fe9-e2ca279f1b4e mon_mkfs: 43: ./ceph-mon --id a --fsid 582bf7cd-f96f-4fd7-8fe9-e2ca279f1b4e --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 --keyring=mkfs/keyring ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 582bf7cd-f96f-4fd7-8fe9-e2ca279f1b4e ./ceph-mon: created monfs at mkfs/a for mon.a auth_cephx_keyring: 104: '[' -f mkfs/a/keyring ']' auth_cephx_keyring: 106: mon_run mon_run: 55: ./ceph-mon --id a --chdir= --osd-pool-default-erasure-code-directory=.libs --mon-data=mkfs/a --log-file=mkfs/a/log --mon-cluster-log-file=mkfs/a/log --run-dir=mkfs/a --pid-file=mkfs/a/pidfile --public-addr 127.0.0.1:7451 auth_cephx_keyring: 108: timeout 360 ./ceph --name mon. 
--keyring mkfs/a/keyring --mon-host 127.0.0.1:7451 mon stat *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** e1: 1 mons at {a=127.0.0.1:7451/0}, election epoch 2, quorum 0 a run: 175: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile kill_daemons: 68: for pidfile in '$(find $DIR -name pidfile)' kkill_daemons: 69: cat mkfs/a/pidfile kill_daemons: 69: pid=24123 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24123 kill_daemons: 72: sleep 0 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24123 kill_daemons: 72: sleep 1 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24123 ./test/mon/mkfs.sh: line 71: kill: (24123) - No such process kill_daemons: 71: break teardown: 36: rm -fr mkfs run: 172: for action in '$actions' run: 173: setup setup: 30: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile find: `mkfs': No such file or directory teardown: 36: rm -fr mkfs setup: 31: mkdir mkfs run: 174: auth_none auth_none: 78: mon_mkfs --auth-supported=none mmon_mkfs: 40: uuidgen mon_mkfs: 40: local fsid=5fdff325-11d0-4c8a-bdd7-847827bf6636 mon_mkfs: 43: ./ceph-mon --id a --fsid 5fdff325-11d0-4c8a-bdd7-847827bf6636 --osd-pool-default-erasure-code-directory=.libs --mkfs --mon-data=mkfs/a --mon-initial-members=a --mon-host=127.0.0.1:7451 --auth-supported=none ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 5fdff325-11d0-4c8a-bdd7-847827bf6636 ./ceph-mon: created monfs at mkfs/a for mon.a auth_none: 81: ./ceph-mon --id a --osd-pool-default-erasure-code-directory=.libs --mon-data=mkfs/a --extract-monmap mkfs/a/monmap 2014-10-08 11:17:15.676706 2ba6d3b20f40 -1 wrote monmap to mkfs/a/monmap auth_none: 86: '[' -f mkfs/a/monmap ']' auth_none: 88: '[' '!' 
-f mkfs/a/keyring ']' auth_none: 90: mon_run --auth-supported=none mon_run: 55: ./ceph-mon --id a --chdir= --osd-pool-default-erasure-code-directory=.libs --mon-data=mkfs/a --log-file=mkfs/a/log --mon-cluster-log-file=mkfs/a/log --run-dir=mkfs/a --pid-file=mkfs/a/pidfile --public-addr 127.0.0.1:7451 --auth-supported=none auth_none: 92: timeout 360 ./ceph --mon-host 127.0.0.1:7451 mon stat *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** e1: 1 mons at {a=127.0.0.1:7451/0}, election epoch 2, quorum 0 a run: 175: teardown teardown: 35: kill_daemons kkill_daemons: 67: find mkfs -name pidfile kill_daemons: 68: for pidfile in '$(find $DIR -name pidfile)' kkill_daemons: 69: cat mkfs/a/pidfile kill_daemons: 69: pid=24193 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24193 kill_daemons: 72: sleep 0 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24193 kill_daemons: 72: sleep 1 kill_daemons: 70: for try in 0 1 1 1 2 3 kill_daemons: 71: kill 24193 ./test/mon/mkfs.sh: line 71: kill: (24193) - No such process kill_daemons: 71: break teardown: 36: rm -fr mkfs PASS: test/mon/mkfs.sh main: 105: setup osd-config setup: 18: local dir=osd-config setup: 19: teardown osd-config teardown: 24: local dir=osd-config teardown: 25: kill_daemons osd-config kill_daemons: 60: local dir=osd-config kkill_daemons: 59: find osd-config kkill_daemons: 59: grep pidfile find: `osd-config': No such file or directory teardown: 26: rm -fr osd-config setup: 20: mkdir osd-config main: 106: local code main: 107: run osd-config run: 22: local dir=osd-config run: 24: export CEPH_ARGS rrun: 25: uuidgen run: 25: CEPH_ARGS+='--fsid=2e2c67fa-66fe-426e-b43e-3127e6d110ae --auth-supported=none ' run: 26: CEPH_ARGS+='--mon-host=127.0.0.1 ' run: 28: local id=a run: 29: call_TEST_functions osd-config a --public-addr 127.0.0.1 call_TEST_functions: 71: local dir=osd-config call_TEST_functions: 72: shift call_TEST_functions: 73: local id=--public-addr call_TEST_functions: 74: shift call_TEST_functions: 76: setup osd-config setup: 18: local dir=osd-config setup: 19: teardown osd-config teardown: 24: local dir=osd-config teardown: 25: kill_daemons osd-config kill_daemons: 60: local dir=osd-config kkill_daemons: 59: find osd-config kkill_daemons: 59: grep pidfile teardown: 26: rm -fr osd-config setup: 20: mkdir osd-config call_TEST_functions: 77: run_mon osd-config --public-addr --public-addr 127.0.0.1 run_mon: 30: local dir=osd-config run_mon: 31: shift run_mon: 32: local id=--public-addr run_mon: 33: shift run_mon: 34: dir+=/--public-addr run_mon: 37: ./ceph-mon --id --public-addr --mkfs --mon-data=osd-config/--public-addr --run-dir=osd-config/--public-addr --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.--public-addr ./ceph-mon: set fsid to 2e2c67fa-66fe-426e-b43e-3127e6d110ae ./ceph-mon: created monfs at osd-config/--public-addr for mon.--public-addr run_mon: 43: ./ceph-mon --id --public-addr --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-config/--public-addr --log-file=osd-config/--public-addr/log --mon-cluster-log-file=osd-config/--public-addr/log --run-dir=osd-config/--public-addr --pid-file=osd-config/--public-addr/pidfile --public-addr 127.0.0.1 ccall_TEST_functions: 78: set ccall_TEST_functions: 78: sed -n -e 's/^\(SHARE_MON_TEST_[0-9a-z_]*\) .*/\1/p' call_TEST_functions: 78: SHARE_MON_FUNCTIONS= call_TEST_functions: 85: 
teardown osd-config teardown: 24: local dir=osd-config teardown: 25: kill_daemons osd-config kill_daemons: 60: local dir=osd-config kkill_daemons: 59: find osd-config kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-config/--public-addr/pidfile kill_daemons: 62: pid=24270 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24270 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24270 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24270 kill_daemons: 64: break teardown: 26: rm -fr osd-config ccall_TEST_functions: 87: set ccall_TEST_functions: 87: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' call_TEST_functions: 87: FUNCTIONS='TEST_config_init TEST_config_track' call_TEST_functions: 88: for TEST_function in '$FUNCTIONS' call_TEST_functions: 89: setup osd-config setup: 18: local dir=osd-config setup: 19: teardown osd-config teardown: 24: local dir=osd-config teardown: 25: kill_daemons osd-config kill_daemons: 60: local dir=osd-config kkill_daemons: 59: find osd-config kkill_daemons: 59: grep pidfile find: `osd-config': No such file or directory teardown: 26: rm -fr osd-config setup: 20: mkdir osd-config call_TEST_functions: 90: TEST_config_init osd-config TEST_config_init: 33: local dir=osd-config TEST_config_init: 35: run_mon osd-config a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-config run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-config/a --run-dir=osd-config/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to 2e2c67fa-66fe-426e-b43e-3127e6d110ae ./ceph-mon: created monfs at osd-config/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-config/a --log-file=osd-config/a/log --mon-cluster-log-file=osd-config/a/log --run-dir=osd-config/a --pid-file=osd-config/a/pidfile --public-addr 127.0.0.1 TEST_config_init: 37: local advance=1000 TEST_config_init: 38: local stale=1000 TEST_config_init: 39: local cache=500 TEST_config_init: 40: run_osd osd-config 0 --osd-map-max-advance 1000 --osd-map-cache-size 500 --osd-pg-epoch-persisted-max-stale 1000 run_osd: 19: local dir=osd-config run_osd: 20: shift run_osd: 21: local id=0 run_osd: 22: shift run_osd: 23: local osd_data=osd-config/0 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=osd-config' run_osd: 27: ceph_disk_args+=' --sysconfdir=osd-config' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch osd-config/ceph.conf run_osd: 33: mkdir -p osd-config/0 run_osd: 34: ./ceph-disk --statedir=osd-config --sysconfdir=osd-config --prepend-to-path= --verbose prepare osd-config/0 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir osd-config/0 run_osd: 37: local 'ceph_args=--fsid=2e2c67fa-66fe-426e-b43e-3127e6d110ae --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=osd-config/0' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=osd-config' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=osd-config/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=osd-config/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+='--osd-map-max-advance 1000 --osd-map-cache-size 500 --osd-pg-epoch-persisted-max-stale 1000' run_osd: 48: mkdir -p osd-config/0 run_osd: 49: CEPH_ARGS='--fsid=2e2c67fa-66fe-426e-b43e-3127e6d110ae --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=osd-config/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=osd-config --debug-osd=20 --log-file=osd-config/osd-$id.log --pid-file=osd-config/osd-$id.pidfile --osd-map-max-advance 1000 --osd-map-cache-size 500 --osd-pg-epoch-persisted-max-stale 1000' run_osd: 49: ./ceph-disk --statedir=osd-config --sysconfdir=osd-config --prepend-to-path= --verbose activate --mark-init=none osd-config/0 DEBUG:ceph-disk:Cluster uuid is 2e2c67fa-66fe-426e-b43e-3127e6d110ae INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 8af41099-91df-4fc7-81ef-911de69f1011 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-config/bootstrap-osd/ceph.keyring osd create --concise 8af41099-91df-4fc7-81ef-911de69f1011 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-config/bootstrap-osd/ceph.keyring mon getmap -o osd-config/0/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap osd-config/0/activate.monmap --osd-data osd-config/0 --osd-journal osd-config/0/journal --osd-uuid 8af41099-91df-4fc7-81ef-911de69f1011 --keyring osd-config/0/keyring 2014-10-08 11:17:19.385065 2b1abce3dbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:17:19.806039 2b1abce3dbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway
2014-10-08 11:17:19.806599 2b1abce3dbc0 -1 filestore(osd-config/0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-08 11:17:20.099619 2b1abce3dbc0 -1 created object store osd-config/0 journal osd-config/0/journal for osd.0 fsid 2e2c67fa-66fe-426e-b43e-3127e6d110ae
2014-10-08 11:17:20.099701 2b1abce3dbc0 -1 auth: error reading file: osd-config/0/keyring: can't open osd-config/0/keyring: (2) No such file or directory
2014-10-08 11:17:20.099864 2b1abce3dbc0 -1 created new key in keyring osd-config/0/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-config/bootstrap-osd/ceph.keyring auth add osd.0 -i osd-config/0/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
DEBUG:ceph-disk:ceph osd.0 data dir is ready at osd-config/0
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=osd-config/0 --osd-journal=osd-config/0/journal
starting osd.0 at :/0 osd_data osd-config/0 osd-config/0/journal
rrun_osd: 54: cat osd-config/0/whoami
run_osd: 54: '[' 0 = 0 ']'
run_osd: 56: ./ceph osd crush create-or-move 0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 58: status=1
run_osd: 60: (( i=0 ))
run_osd: 60: (( i < 60 ))
run_osd: 61: ceph osd dump
run_osd: 61: grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up in weight 1 up_from 3 up_thru 0 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/24486 127.0.0.1:6801/24486 127.0.0.1:6802/24486 127.0.0.1:6803/24486 exists,up 8af41099-91df-4fc7-81ef-911de69f1011
run_osd: 64: status=0
run_osd: 65: break
run_osd: 69: return 0
TEST_config_init: 45: CEPH_ARGS=
TEST_config_init: 45: ./ceph --admin-daemon osd-config/ceph-osd.0.asok log flush
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
{}
TEST_config_init: 46: grep 'is not > osd_map_max_advance' osd-config/osd-0.log
2014-10-08 11:17:20.679395 2af67ee4cbc0 0 log_channel(default) log [WRN] : osd_map_cache_size (500) is not > osd_map_max_advance (1000)
TEST_config_init: 47: grep 'is not > osd_pg_epoch_persisted_max_stale' osd-config/osd-0.log
2014-10-08 11:17:20.679407 2af67ee4cbc0 0 log_channel(default) log [WRN] : osd_map_cache_size (500) is not > osd_pg_epoch_persisted_max_stale (1000)
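
Both warnings greped for above come from the same sanity check in the OSD: osd_map_cache_size must stay strictly greater than both osd_map_max_advance and osd_pg_epoch_persisted_max_stale, and TEST_config_init provokes the warnings by starting osd.0 with 500 against 1000/1000. The equivalent check, restated in shell (the OSD performs it internally; this is only a sketch using the same ceph-conf query seen elsewhere in this log):

    cache=$(./ceph-conf --show-config-value osd_map_cache_size)
    advance=$(./ceph-conf --show-config-value osd_map_max_advance)
    stale=$(./ceph-conf --show-config-value osd_pg_epoch_persisted_max_stale)
    # mirror of the [WRN] lines the test greps for
    test "$cache" -gt "$advance" || echo "osd_map_cache_size ($cache) is not > osd_map_max_advance ($advance)"
    test "$cache" -gt "$stale" || echo "osd_map_cache_size ($cache) is not > osd_pg_epoch_persisted_max_stale ($stale)"
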
call_TEST_functions: 91: teardown osd-config
teardown: 24: local dir=osd-config
teardown: 25: kill_daemons osd-config
kill_daemons: 60: local dir=osd-config
kkill_daemons: 59: find osd-config
kkill_daemons: 59: grep pidfile
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat osd-config/a/pidfile
kill_daemons: 62: pid=24304
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 24304
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 24304
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 24304
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat osd-config/osd-0.pidfile
kill_daemons: 62: pid=24488
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 24488
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 24488
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 24488
kill_daemons: 64: break
teardown: 26: rm -fr osd-config
call_TEST_functions: 88: for TEST_function in '$FUNCTIONS'
call_TEST_functions: 89: setup osd-config
setup: 18: local dir=osd-config
setup: 19: teardown osd-config
teardown: 24: local dir=osd-config
teardown: 25: kill_daemons osd-config
kill_daemons: 60: local dir=osd-config
kkill_daemons: 59: find osd-config
kkill_daemons: 59: grep pidfile
find: `osd-config': No such file or directory
teardown: 26: rm -fr osd-config
setup: 20: mkdir osd-config
call_TEST_functions: 90: TEST_config_track osd-config
TEST_config_track: 51: local dir=osd-config
TEST_config_track: 53: run_mon osd-config a --public-addr 127.0.0.1
run_mon: 30: local dir=osd-config
run_mon: 31: shift
run_mon: 32: local id=a
run_mon: 33: shift
run_mon: 34: dir+=/a
run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-config/a --run-dir=osd-config/a --public-addr 127.0.0.1
./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a
./ceph-mon: set fsid to 2e2c67fa-66fe-426e-b43e-3127e6d110ae
./ceph-mon: created monfs at osd-config/a for mon.a
run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-config/a --log-file=osd-config/a/log --mon-cluster-log-file=osd-config/a/log --run-dir=osd-config/a --pid-file=osd-config/a/pidfile --public-addr 127.0.0.1
TEST_config_track: 55: run_osd osd-config 0
run_osd: 19: local dir=osd-config
run_osd: 20: shift
run_osd: 21: local id=0
run_osd: 22: shift
run_osd: 23: local osd_data=osd-config/0
run_osd: 25: local ceph_disk_args
run_osd: 26: ceph_disk_args+=' --statedir=osd-config'
run_osd: 27: ceph_disk_args+=' --sysconfdir=osd-config'
run_osd: 28: ceph_disk_args+=' --prepend-to-path='
run_osd: 29: ceph_disk_args+=' --verbose'
run_osd: 31: touch osd-config/ceph.conf
run_osd: 33: mkdir -p osd-config/0
run_osd: 34: ./ceph-disk --statedir=osd-config --sysconfdir=osd-config --prepend-to-path= --verbose prepare osd-config/0
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd.
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir osd-config/0 run_osd: 37: local 'ceph_args=--fsid=2e2c67fa-66fe-426e-b43e-3127e6d110ae --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=osd-config/0' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=osd-config' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=osd-config/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=osd-config/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p osd-config/0 run_osd: 49: CEPH_ARGS='--fsid=2e2c67fa-66fe-426e-b43e-3127e6d110ae --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=osd-config/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=osd-config --debug-osd=20 --log-file=osd-config/osd-$id.log --pid-file=osd-config/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=osd-config --sysconfdir=osd-config --prepend-to-path= --verbose activate --mark-init=none osd-config/0 DEBUG:ceph-disk:Cluster uuid is 2e2c67fa-66fe-426e-b43e-3127e6d110ae INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is cb859ccd-ec9c-4181-8c55-76187200649e DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-config/bootstrap-osd/ceph.keyring osd create --concise cb859ccd-ec9c-4181-8c55-76187200649e *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-config/bootstrap-osd/ceph.keyring mon getmap -o osd-config/0/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap osd-config/0/activate.monmap --osd-data osd-config/0 --osd-journal osd-config/0/journal --osd-uuid cb859ccd-ec9c-4181-8c55-76187200649e --keyring osd-config/0/keyring 2014-10-08 11:17:24.380197 2ba494b2bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:17:24.405964 2ba494b2bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:17:24.406504 2ba494b2bbc0 -1 filestore(osd-config/0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:17:24.435035 2ba494b2bbc0 -1 created object store osd-config/0 journal osd-config/0/journal for osd.0 fsid 2e2c67fa-66fe-426e-b43e-3127e6d110ae 2014-10-08 11:17:24.435117 2ba494b2bbc0 -1 auth: error reading file: osd-config/0/keyring: can't open osd-config/0/keyring: (2) No such file or directory 2014-10-08 11:17:24.435237 2ba494b2bbc0 -1 created new key in keyring osd-config/0/keyring DEBUG:ceph-disk:Authorizing OSD key... 
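
Stepping back from the activate trace above: run_osd never writes a populated ceph.conf (it only touches an empty one); every option travels in the CEPH_ARGS environment variable, which all ceph CLIs and daemons honor. A minimal sketch of that pattern as the trace shows it (variable names follow run_osd; paths are illustrative):

    # configuration via environment, not via a conf file
    ceph_args="--fsid=$(uuidgen) --auth-supported=none --mon-host=127.0.0.1 "
    ceph_args+=" --osd-journal-size=100 --osd-data=$dir/0 --debug-osd=20"
    CEPH_ARGS="$ceph_args" ./ceph-disk --statedir=$dir --sysconfdir=$dir \
        --prepend-to-path= --verbose activate --mark-init=none $dir/0
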
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-config/bootstrap-osd/ceph.keyring auth add osd.0 -i osd-config/0/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at osd-config/0 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=osd-config/0 --osd-journal=osd-config/0/journal starting osd.0 at :/0 osd_data osd-config/0 osd-config/0/journal rrun_osd: 54: cat osd-config/0/whoami run_osd: 54: '[' 0 = 0 ']' run_osd: 56: ./ceph osd crush create-or-move 0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map run_osd: 58: status=1 run_osd: 60: (( i=0 )) run_osd: 60: (( i < 60 )) run_osd: 61: ceph osd dump run_osd: 61: grep 'osd.0 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up in weight 1 up_from 4 up_thru 0 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/24865 127.0.0.1:6801/24865 127.0.0.1:6802/24865 127.0.0.1:6803/24865 exists,up cb859ccd-ec9c-4181-8c55-76187200649e run_osd: 64: status=0 run_osd: 65: break run_osd: 69: return 0 TTEST_config_track: 58: CEPH_ARGS= TTEST_config_track: 58: ./ceph-conf --show-config-value osd_map_cache_size TEST_config_track: 58: local osd_map_cache_size=500 TTEST_config_track: 60: CEPH_ARGS= TTEST_config_track: 60: ./ceph-conf --show-config-value osd_map_max_advance TEST_config_track: 60: local osd_map_max_advance=200 TTEST_config_track: 62: CEPH_ARGS= TTEST_config_track: 62: ./ceph-conf --show-config-value osd_pg_epoch_persisted_max_stale TEST_config_track: 62: local osd_pg_epoch_persisted_max_stale=200 TEST_config_track: 66: grep 'is not > osd_map_max_advance' osd-config/osd-0.log TEST_config_track: 67: local cache=100 TEST_config_track: 68: ./ceph tell osd.0 injectargs '--osd-map-cache-size 100' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd_map_cache_size = '100' TEST_config_track: 69: CEPH_ARGS= TEST_config_track: 69: ./ceph --admin-daemon osd-config/ceph-osd.0.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_config_track: 70: grep 'is not > osd_map_max_advance' osd-config/osd-0.log 2014-10-08 11:17:25.932560 2ac453cc0700 0 log_channel(default) log [WRN] : osd_map_cache_size (100) is not > osd_map_max_advance (200) TEST_config_track: 71: rm osd-config/osd-0.log TEST_config_track: 72: CEPH_ARGS= TEST_config_track: 72: ./ceph --admin-daemon osd-config/ceph-osd.0.asok log reopen *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_config_track: 77: grep 'is not > osd_map_max_advance' osd-config/osd-0.log TEST_config_track: 78: local cache=500 TEST_config_track: 79: ./ceph tell osd.0 injectargs '--osd-map-cache-size 500' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd_map_cache_size = '500' TEST_config_track: 80: CEPH_ARGS= TEST_config_track: 80: ./ceph --admin-daemon osd-config/ceph-osd.0.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_config_track: 81: grep 'is not > osd_map_max_advance' osd-config/osd-0.log TEST_config_track: 86: grep 'is not > osd_map_max_advance' osd-config/osd-0.log TEST_config_track: 87: local advance=1000 TEST_config_track: 88: ./ceph tell osd.0 injectargs '--osd-map-max-advance 1000' *** DEVELOPER 
MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd_map_max_advance = '1000' TEST_config_track: 89: CEPH_ARGS= TEST_config_track: 89: ./ceph --admin-daemon osd-config/ceph-osd.0.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_config_track: 90: grep 'is not > osd_map_max_advance' osd-config/osd-0.log 2014-10-08 11:17:26.792452 2ac453cc0700 0 log_channel(default) log [WRN] : osd_map_cache_size (500) is not > osd_map_max_advance (1000) TEST_config_track: 95: grep 'is not > osd_pg_epoch_persisted_max_stale' osd-config/osd-0.log TEST_config_track: 96: local stale=1000 TEST_config_track: 97: ceph tell osd.0 injectargs '--osd-pg-epoch-persisted-max-stale 1000' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd_pg_epoch_persisted_max_stale = '1000' TEST_config_track: 98: CEPH_ARGS= TEST_config_track: 98: ./ceph --admin-daemon osd-config/ceph-osd.0.asok log flush *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** {} TEST_config_track: 99: grep 'is not > osd_pg_epoch_persisted_max_stale' osd-config/osd-0.log 2014-10-08 11:17:27.313446 2ac453cc0700 0 log_channel(default) log [WRN] : osd_map_cache_size (500) is not > osd_pg_epoch_persisted_max_stale (1000) call_TEST_functions: 91: teardown osd-config teardown: 24: local dir=osd-config teardown: 25: kill_daemons osd-config kill_daemons: 60: local dir=osd-config kkill_daemons: 59: find osd-config kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-config/a/pidfile kill_daemons: 62: pid=24683 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24683 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24683 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24683 kill_daemons: 64: break kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-config/osd-0.pidfile kill_daemons: 62: pid=24867 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24867 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 24867 kill_daemons: 64: break teardown: 26: rm -fr osd-config main: 108: code=0 main: 112: teardown osd-config teardown: 24: local dir=osd-config teardown: 25: kill_daemons osd-config kill_daemons: 60: local dir=osd-config kkill_daemons: 59: find osd-config kkill_daemons: 59: grep pidfile find: `osd-config': No such file or directory teardown: 26: rm -fr osd-config main: 113: return 0 PASS: test/osd/osd-config.sh main: 105: setup osd-bench setup: 18: local dir=osd-bench setup: 19: teardown osd-bench teardown: 24: local dir=osd-bench teardown: 25: kill_daemons osd-bench kill_daemons: 60: local dir=osd-bench kkill_daemons: 59: find osd-bench kkill_daemons: 59: grep pidfile find: `osd-bench': No such file or directory teardown: 26: rm -fr osd-bench setup: 20: mkdir osd-bench main: 106: local code main: 107: run osd-bench run: 22: local dir=osd-bench run: 24: export CEPH_ARGS rrun: 25: uuidgen run: 25: CEPH_ARGS+='--fsid=d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d --auth-supported=none ' run: 26: CEPH_ARGS+='--mon-host=127.0.0.1 ' run: 28: local id=a run: 29: call_TEST_functions osd-bench a --public-addr 127.0.0.1 call_TEST_functions: 71: local dir=osd-bench call_TEST_functions: 72: shift call_TEST_functions: 73: local id=--public-addr call_TEST_functions: 74: shift call_TEST_functions: 76: setup osd-bench setup: 18: local 
dir=osd-bench setup: 19: teardown osd-bench teardown: 24: local dir=osd-bench teardown: 25: kill_daemons osd-bench kill_daemons: 60: local dir=osd-bench kkill_daemons: 59: find osd-bench kkill_daemons: 59: grep pidfile teardown: 26: rm -fr osd-bench setup: 20: mkdir osd-bench call_TEST_functions: 77: run_mon osd-bench --public-addr --public-addr 127.0.0.1 run_mon: 30: local dir=osd-bench run_mon: 31: shift run_mon: 32: local id=--public-addr run_mon: 33: shift run_mon: 34: dir+=/--public-addr run_mon: 37: ./ceph-mon --id --public-addr --mkfs --mon-data=osd-bench/--public-addr --run-dir=osd-bench/--public-addr --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.--public-addr ./ceph-mon: set fsid to d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d ./ceph-mon: created monfs at osd-bench/--public-addr for mon.--public-addr run_mon: 43: ./ceph-mon --id --public-addr --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-bench/--public-addr --log-file=osd-bench/--public-addr/log --mon-cluster-log-file=osd-bench/--public-addr/log --run-dir=osd-bench/--public-addr --pid-file=osd-bench/--public-addr/pidfile --public-addr 127.0.0.1 ccall_TEST_functions: 78: set ccall_TEST_functions: 78: sed -n -e 's/^\(SHARE_MON_TEST_[0-9a-z_]*\) .*/\1/p' call_TEST_functions: 78: SHARE_MON_FUNCTIONS= call_TEST_functions: 85: teardown osd-bench teardown: 24: local dir=osd-bench teardown: 25: kill_daemons osd-bench kill_daemons: 60: local dir=osd-bench kkill_daemons: 59: find osd-bench kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat osd-bench/--public-addr/pidfile kill_daemons: 62: pid=25304 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 25304 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 25304 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 25304 kill_daemons: 64: break teardown: 26: rm -fr osd-bench ccall_TEST_functions: 87: set ccall_TEST_functions: 87: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' call_TEST_functions: 87: FUNCTIONS=TEST_bench call_TEST_functions: 88: for TEST_function in '$FUNCTIONS' call_TEST_functions: 89: setup osd-bench setup: 18: local dir=osd-bench setup: 19: teardown osd-bench teardown: 24: local dir=osd-bench teardown: 25: kill_daemons osd-bench kill_daemons: 60: local dir=osd-bench kkill_daemons: 59: find osd-bench kkill_daemons: 59: grep pidfile find: `osd-bench': No such file or directory teardown: 26: rm -fr osd-bench setup: 20: mkdir osd-bench call_TEST_functions: 90: TEST_bench osd-bench TEST_bench: 33: local dir=osd-bench TEST_bench: 35: run_mon osd-bench a --public-addr 127.0.0.1 run_mon: 30: local dir=osd-bench run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=osd-bench/a --run-dir=osd-bench/a --public-addr 127.0.0.1 ./ceph-mon: renaming mon.noname-a 127.0.0.1:6789/0 to mon.a ./ceph-mon: set fsid to d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d ./ceph-mon: created monfs at osd-bench/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=osd-bench/a --log-file=osd-bench/a/log --mon-cluster-log-file=osd-bench/a/log --run-dir=osd-bench/a 
--pid-file=osd-bench/a/pidfile --public-addr 127.0.0.1 TEST_bench: 37: run_osd osd-bench 0 run_osd: 19: local dir=osd-bench run_osd: 20: shift run_osd: 21: local id=0 run_osd: 22: shift run_osd: 23: local osd_data=osd-bench/0 run_osd: 25: local ceph_disk_args run_osd: 26: ceph_disk_args+=' --statedir=osd-bench' run_osd: 27: ceph_disk_args+=' --sysconfdir=osd-bench' run_osd: 28: ceph_disk_args+=' --prepend-to-path=' run_osd: 29: ceph_disk_args+=' --verbose' run_osd: 31: touch osd-bench/ceph.conf run_osd: 33: mkdir -p osd-bench/0 run_osd: 34: ./ceph-disk --statedir=osd-bench --sysconfdir=osd-bench --prepend-to-path= --verbose prepare osd-bench/0 INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir osd-bench/0 run_osd: 37: local 'ceph_args=--fsid=d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d --auth-supported=none --mon-host=127.0.0.1 ' run_osd: 38: ceph_args+=' --osd-journal-size=100' run_osd: 39: ceph_args+=' --osd-data=osd-bench/0' run_osd: 40: ceph_args+=' --chdir=' run_osd: 41: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs' run_osd: 42: ceph_args+=' --run-dir=osd-bench' run_osd: 43: ceph_args+=' --debug-osd=20' run_osd: 44: ceph_args+=' --log-file=osd-bench/osd-$id.log' run_osd: 45: ceph_args+=' --pid-file=osd-bench/osd-$id.pidfile' run_osd: 46: ceph_args+=' ' run_osd: 47: ceph_args+= run_osd: 48: mkdir -p osd-bench/0 run_osd: 49: CEPH_ARGS='--fsid=d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d --auth-supported=none --mon-host=127.0.0.1 --osd-journal-size=100 --osd-data=osd-bench/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=osd-bench --debug-osd=20 --log-file=osd-bench/osd-$id.log --pid-file=osd-bench/osd-$id.pidfile ' run_osd: 49: ./ceph-disk --statedir=osd-bench --sysconfdir=osd-bench --prepend-to-path= --verbose activate --mark-init=none osd-bench/0 DEBUG:ceph-disk:Cluster uuid is d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 54291df4-f1cb-4e3d-b0e8-ed10ad0da6f6 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-bench/bootstrap-osd/ceph.keyring osd create --concise 54291df4-f1cb-4e3d-b0e8-ed10ad0da6f6 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... 
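
As the INFO lines above show, ceph-disk discovers its settings at prepare time by shelling out rather than parsing the conf itself; the same commands, restated as a sketch:

    # how ceph-disk resolves settings during "prepare" (commands as logged)
    fsid=$(ceph-osd --cluster=ceph --show-config-value=fsid)
    journal_size=$(ceph-osd --cluster=ceph --show-config-value=osd_journal_size)
    mkfs_type=$(ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type)
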
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-bench/bootstrap-osd/ceph.keyring mon getmap -o osd-bench/0/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap osd-bench/0/activate.monmap --osd-data osd-bench/0 --osd-journal osd-bench/0/journal --osd-uuid 54291df4-f1cb-4e3d-b0e8-ed10ad0da6f6 --keyring osd-bench/0/keyring
2014-10-08 11:17:31.596859 2b8355272bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-08 11:17:32.339656 2b8355272bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-08 11:17:32.350855 2b8355272bc0 -1 filestore(osd-bench/0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-08 11:17:32.981329 2b8355272bc0 -1 created object store osd-bench/0 journal osd-bench/0/journal for osd.0 fsid d1f7d291-eedb-4ad1-9eff-0a4fc9f7505d
2014-10-08 11:17:32.981416 2b8355272bc0 -1 auth: error reading file: osd-bench/0/keyring: can't open osd-bench/0/keyring: (2) No such file or directory
2014-10-08 11:17:32.981567 2b8355272bc0 -1 created new key in keyring osd-bench/0/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring osd-bench/bootstrap-osd/ceph.keyring auth add osd.0 -i osd-bench/0/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
DEBUG:ceph-disk:ceph osd.0 data dir is ready at osd-bench/0
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=osd-bench/0 --osd-journal=osd-bench/0/journal
starting osd.0 at :/0 osd_data osd-bench/0 osd-bench/0/journal
rrun_osd: 54: cat osd-bench/0/whoami
run_osd: 54: '[' 0 = 0 ']'
run_osd: 56: ./ceph osd crush create-or-move 0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 58: status=1
run_osd: 60: (( i=0 ))
run_osd: 60: (( i < 60 ))
run_osd: 61: ceph osd dump
run_osd: 61: grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up in weight 1 up_from 3 up_thru 0 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/25519 127.0.0.1:6801/25519 127.0.0.1:6802/25519 127.0.0.1:6803/25519 exists,up 54291df4-f1cb-4e3d-b0e8-ed10ad0da6f6
run_osd: 64: status=0
run_osd: 65: break
run_osd: 69: return 0
TTEST_bench: 40: CEPH_ARGS=
TTEST_bench: 40: ./ceph-conf --show-config-value osd_bench_small_size_max_iops
TEST_bench: 40: local osd_bench_small_size_max_iops=100
TTEST_bench: 42: CEPH_ARGS=
TTEST_bench: 42: ./ceph-conf --show-config-value osd_bench_large_size_max_throughput
TEST_bench: 42: local osd_bench_large_size_max_throughput=104857600
TTEST_bench: 44: CEPH_ARGS=
TTEST_bench: 44: ./ceph-conf --show-config-value osd_bench_max_block_size
TEST_bench: 44: local osd_bench_max_block_size=67108864
TTEST_bench: 46: CEPH_ARGS=
TTEST_bench: 46: ./ceph-conf --show-config-value osd_bench_duration
TEST_bench: 46: local osd_bench_duration=30
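
The four values just read determine the refusal thresholds TEST_bench expects; each limit follows by plain arithmetic, and the test drives `./ceph tell osd.0 bench <count> <bsize>` one unit past each one. A worked restatement of that arithmetic (shell sketch; names and numbers taken from the trace):

    # small-block cap:  100 IOPS * 30 s * 1024 B  = 3072000 bytes
    # large-block cap:  104857600 B/s * 30 s      = 3145728000 bytes
    # block-size cap:   67108864 B (65536 kB), so a size of 67108865 must fail
    bsize=1024
    max_count=$((100 * 30 * bsize))     # 3072000; the test sends 3072001
    big_count=$((104857600 * 30))       # 3145728000; the test sends 3145728001
    ./ceph tell osd.0 bench $((max_count + 1)) $bsize   # expect Error EINVAL
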
TEST_bench: 51: ./ceph tell osd.0 bench 1024 67108865
TEST_bench: 52: grep osd_bench_max_block_size osd-bench/out
Error EINVAL: block 'size' values are capped at 65536 kB. If you wish to use a higher value, please adjust 'osd_bench_max_block_size'
TEST_bench: 57: local bsize=1024
TEST_bench: 58: local max_count=3072000
TEST_bench: 59: ./ceph tell osd.0 bench 3072001 1024
TEST_bench: 60: grep osd_bench_small_size_max_iops osd-bench/out
Error EINVAL: 'count' values greater than 3072000 for a block size of 1024 bytes, assuming 100 IOPS, for 30 seconds, can cause ill effects on osd. Please adjust 'osd_bench_small_size_max_iops' with a higher value if you wish to use a higher 'count'.
TEST_bench: 65: local bsize=1048577
TEST_bench: 66: local max_count=3145728000
TEST_bench: 67: ./ceph tell osd.0 bench 3145728001 1048577
TEST_bench: 68: grep osd_bench_large_size_max_throughput osd-bench/out
Error EINVAL: 'count' values greater than 3145728000 for a block size of 1024 kB, assuming 102400 kB/s, for 30 seconds, can cause ill effects on osd. Please adjust 'osd_bench_large_size_max_throughput' with a higher value if you wish to use a higher 'count'.
TEST_bench: 73: ./ceph tell osd.0 bench
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
{ "bytes_written": 1073741824,
  "blocksize": 4194304,
  "bytes_per_sec": "56307554.000000"}
call_TEST_functions: 91: teardown osd-bench
teardown: 24: local dir=osd-bench
teardown: 25: kill_daemons osd-bench
kill_daemons: 60: local dir=osd-bench
kkill_daemons: 59: find osd-bench
kkill_daemons: 59: grep pidfile
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat osd-bench/a/pidfile
kill_daemons: 62: pid=25337
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 25337
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 25337
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat osd-bench/osd-0.pidfile
kill_daemons: 62: pid=25521
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 25521
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 25521
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 25521
kill_daemons: 64: break
teardown: 26: rm -fr osd-bench
main: 108: code=0
main: 112: teardown osd-bench
teardown: 24: local dir=osd-bench
teardown: 25: kill_daemons osd-bench
kill_daemons: 60: local dir=osd-bench
kkill_daemons: 59: find osd-bench
kkill_daemons: 59: grep pidfile
find: `osd-bench': No such file or directory
teardown: 26: rm -fr osd-bench
main: 113: return 0
PASS: test/osd/osd-bench.sh
+ PS4='${FUNCNAME[0]}: $LINENO: '
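
That PS4 assignment is what produces every `funcname: lineno:` prefix in this log: with xtrace enabled, bash prints the expanded PS4 before each command and replicates its first character once per extra level of indirection, which is why commands run inside command substitutions appear as `kkill_daemons:` or `ccall_TEST_functions:`. A minimal repro (paths hypothetical, for the demo only):

    #!/bin/bash
    PS4='${FUNCNAME[0]}: $LINENO: '    # same prompt the test scripts set
    set -x
    echo 123 > /tmp/pidfile            # hypothetical fixture for the demo
    f() { pid=$(cat /tmp/pidfile) ; }  # $( ) adds one level of indirection
    f                                  # cat traces as "ff: ...", pid= as "f: ..."
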
: 20: export PATH=:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
: 20: PATH=:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
: 21: DIR=test-ceph-disk
: 22: MON_ID=a
: 23: MONA=127.0.0.1:7451
: 24: TEST_POOL=rbd
:: 25: uuidgen
: 25: FSID=2bc5bc92-b9d1-493f-98de-3b1e089c679f
: 26: export CEPH_CONF=/dev/null
: 26: CEPH_CONF=/dev/null
: 27: export 'CEPH_ARGS=--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f'
: 27: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f'
: 28: CEPH_ARGS+=' --chdir='
: 29: CEPH_ARGS+=' --run-dir=test-ceph-disk'
: 30: CEPH_ARGS+=' --mon-host=127.0.0.1:7451'
: 31: CEPH_ARGS+=' --log-file=test-ceph-disk/$name.log'
: 32: CEPH_ARGS+=' --pid-file=test-ceph-disk/$name.pidfile'
: 33: CEPH_ARGS+=' --osd-pool-default-erasure-code-directory=.libs'
: 34: CEPH_ARGS+=' --auth-supported=none'
: 35: CEPH_DISK_ARGS=
: 36: CEPH_DISK_ARGS+=' --statedir=test-ceph-disk'
: 37: CEPH_DISK_ARGS+=' --sysconfdir=test-ceph-disk'
: 38: CEPH_DISK_ARGS+=' --prepend-to-path='
: 39: CEPH_DISK_ARGS+=' --verbose'
: 40: TIMEOUT=360
:: 42: which cat
: 42: cat=/bin/cat
:: 43: which timeout
: 43: timeout=/usr/bin/timeout
:: 44: which diff
: 44: diff=/usr/bin/diff
: 244: run
run: 228: local default_actions
run: 229: default_actions+='test_path '
run: 230: default_actions+='test_no_path '
run: 231: default_actions+='test_find_cluster_by_uuid '
run: 232: default_actions+='test_prepend_to_path '
run: 233: default_actions+='test_activate_dir_magic '
run: 234: default_actions+='test_activate_dir '
run: 235: default_actions+='test_keyring_path '
run: 236: local 'actions=test_path test_no_path test_find_cluster_by_uuid test_prepend_to_path test_activate_dir_magic test_activate_dir test_keyring_path '
run: 237: for action in '$actions'
run: 238: setup
setup: 47: teardown
teardown: 53: kill_daemons
kkill_daemons: 75: find test-ceph-disk
kkill_daemons: 75: grep pidfile
find: `test-ceph-disk': No such file or directory
teardown: 54: rm -fr test-ceph-disk
setup: 48: mkdir test-ceph-disk
setup: 49: touch test-ceph-disk/ceph.conf
run: 239: test_path
test_path: 146: tweak_path use_path
tweak_path: 99: local tweaker=use_path
tweak_path: 101: setup
setup: 47: teardown
teardown: 53: kill_daemons
kkill_daemons: 75: find test-ceph-disk
kkill_daemons: 75: grep pidfile
teardown: 54: rm -fr test-ceph-disk
setup: 48: mkdir test-ceph-disk
setup: 49: touch test-ceph-disk/ceph.conf
tweak_path: 103: command_fixture ceph-conf
command_fixture: 86: local command=ceph-conf
ccommand_fixture: 88: which ceph-conf
command_fixture: 88: '[' ./ceph-conf = ./ceph-conf ']'
command_fixture: 90: cat
command_fixture: 95: chmod +x test-ceph-disk/ceph-conf
tweak_path: 104: command_fixture ceph-osd
command_fixture: 86: local command=ceph-osd
ccommand_fixture: 88: which ceph-osd
command_fixture: 88: '[' ./ceph-osd = ./ceph-osd ']'
command_fixture: 90: cat
command_fixture: 95: chmod +x test-ceph-disk/ceph-osd
tweak_path: 106: test_activate_dir
test_activate_dir: 185: run_mon
run_mon: 58: local mon_dir=test-ceph-disk/a
run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a
./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a
./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f
./ceph-mon: created monfs at test-ceph-disk/a for mon.a
run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451
test_activate_dir: 187: local osd_data=test-ceph-disk/osd
test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd
test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd.
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 64e46d4e-f655-4a08-9681-61319e2594ab DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise 64e46d4e-f655-4a08-9681-61319e2594ab *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid 64e46d4e-f655-4a08-9681-61319e2594ab --keyring test-ceph-disk/osd/keyring 2014-10-08 11:17:57.096546 2b59ad21bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:17:57.138970 2b59ad21bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:17:57.139461 2b59ad21bbc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:17:57.166601 2b59ad21bbc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:17:57.166674 2b59ad21bbc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:17:57.166806 2b59ad21bbc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy tweak_path: 108: '[' '!' -f test-ceph-disk/used-ceph-conf ']' tweak_path: 109: '[' '!' -f test-ceph-disk/used-ceph-osd ']' tweak_path: 111: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=25919 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 25919 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 25919 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 25919 ./test/ceph-disk.sh: line 79: kill: (25919) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=26102 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26102 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26102 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26102 ./test/ceph-disk.sh: line 79: kill: (26102) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk tweak_path: 113: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf tweak_path: 115: command_fixture ceph-conf command_fixture: 86: local command=ceph-conf ccommand_fixture: 88: which ceph-conf command_fixture: 88: '[' ./ceph-conf = ./ceph-conf ']' command_fixture: 90: cat command_fixture: 95: chmod +x test-ceph-disk/ceph-conf tweak_path: 116: command_fixture ceph-osd command_fixture: 86: local command=ceph-osd ccommand_fixture: 88: which ceph-osd command_fixture: 88: '[' ./ceph-osd = ./ceph-osd ']' command_fixture: 90: cat command_fixture: 95: chmod +x test-ceph-disk/ceph-osd tweak_path: 118: use_path test_activate_dir 
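
The `use_path test_activate_dir` call about to be traced is the point of test_path: command_fixture drops an executable wrapper for each command into test-ceph-disk, use_path prepends that directory to PATH, and afterwards tweak_path asserts the used-ceph-conf/used-ceph-osd marker files exist, proving ceph-disk picked the wrappers up. A sketch of both helpers as the trace implies them (the wrapper body itself is not visible in the xtrace, so its text is an assumption):

    command_fixture() {
        local command=$1
        [ "$(which $command)" = "./$command" ] || return 1
        # wrapper records that it ran, then delegates to the real binary
        printf '#!/bin/bash\ntouch test-ceph-disk/used-%s\nexec ./%s "$@"\n' \
            "$command" "$command" > test-ceph-disk/$command
        chmod +x test-ceph-disk/$command
    }

    use_path() { PATH=test-ceph-disk:$PATH "$@" ; }
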
use_path: 141: PATH=test-ceph-disk::/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games use_path: 141: test_activate_dir test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is d15142f0-3813-4ce7-bc0d-0356bf1db02b DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise d15142f0-3813-4ce7-bc0d-0356bf1db02b *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid d15142f0-3813-4ce7-bc0d-0356bf1db02b --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:04.044316 2b0b5a2c5bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:04.078869 2b0b5a2c5bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:04.079306 2b0b5a2c5bc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:04.106313 2b0b5a2c5bc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:04.106418 2b0b5a2c5bc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:04.106551 2b0b5a2c5bc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy tweak_path: 120: '[' -f test-ceph-disk/used-ceph-conf ']' tweak_path: 121: '[' -f test-ceph-disk/used-ceph-osd ']' tweak_path: 123: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=26360 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26360 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26360 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 
kill_daemons: 79: kill 26360 ./test/ceph-disk.sh: line 79: kill: (26360) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=26554 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26554 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26554 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26554 ./test/ceph-disk.sh: line 79: kill: (26554) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk run: 237: for action in '$actions' run: 238: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf run: 239: test_no_path test_no_path: 150: unset PATH test_no_path: 150: test_activate_dir test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is a7ebf405-87dd-4c9a-8f12-95cc52975868 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise a7ebf405-87dd-4c9a-8f12-95cc52975868 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid a7ebf405-87dd-4c9a-8f12-95cc52975868 --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:11.214611 2af721a0bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:11.330609 2af721a0bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:11.331207 2af721a0bbc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:11.356324 2af721a0bbc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:11.356410 2af721a0bbc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:11.356561 2af721a0bbc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... 
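
The authorization step announced above registers the freshly generated key under the client.bootstrap-osd identity, which exists for exactly this kind of provisioning. Restated with shell quoting added (the INFO line below prints the argv unquoted):

    # register osd.0's key; caps as shown in the activate trace
    ceph --cluster ceph --name client.bootstrap-osd \
         --keyring test-ceph-disk/bootstrap-osd/ceph.keyring \
         auth add osd.0 -i test-ceph-disk/osd/keyring \
         osd 'allow *' mon 'allow profile osd'
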
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=26811 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26811 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26811 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26811 ./test/ceph-disk.sh: line 79: kill: (26811) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=26988 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26988 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26988 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 26988 ./test/ceph-disk.sh: line 79: kill: (26988) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk run: 237: for action in '$actions' run: 238: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf run: 239: test_find_cluster_by_uuid test_find_cluster_by_uuid: 209: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf test_find_cluster_by_uuid: 210: test_activate_dir test_find_cluster_by_uuid: 210: tee test-ceph-disk/test_find test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid 
to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 9c5390c8-cbf2-440e-9aeb-e621dd5245cd DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise 9c5390c8-cbf2-440e-9aeb-e621dd5245cd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid 9c5390c8-cbf2-440e-9aeb-e621dd5245cd --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:18.057910 2b4c4a038bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:18.087104 2b4c4a038bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:18.087580 2b4c4a038bc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:18.116945 2b4c4a038bc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:18.117017 2b4c4a038bc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:18.117127 2b4c4a038bc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy test_find_cluster_by_uuid: 211: grep 'No cluster conf found in test-ceph-disk' test-ceph-disk/test_find test_find_cluster_by_uuid: 212: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=27244 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27244 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27244 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27244 ./test/ceph-disk.sh: line 79: kill: (27244) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=27427 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27427 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27427 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27427 ./test/ceph-disk.sh: line 79: kill: (27427) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk test_find_cluster_by_uuid: 214: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file 
or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf test_find_cluster_by_uuid: 215: rm test-ceph-disk/ceph.conf test_find_cluster_by_uuid: 216: test_activate_dir test_find_cluster_by_uuid: 217: grep --quiet 'No cluster conf found in test-ceph-disk' test-ceph-disk/test_find test_find_cluster_by_uuid: 218: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=27682 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27682 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27682 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27682 ./test/ceph-disk.sh: line 79: kill: (27682) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk run: 237: for action in '$actions' run: 238: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf run: 239: test_prepend_to_path test_prepend_to_path: 137: tweak_path use_prepend_to_path tweak_path: 99: local tweaker=use_prepend_to_path tweak_path: 101: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf tweak_path: 103: command_fixture ceph-conf command_fixture: 86: local command=ceph-conf ccommand_fixture: 88: which ceph-conf command_fixture: 88: '[' ./ceph-conf = ./ceph-conf ']' command_fixture: 90: cat command_fixture: 95: chmod +x test-ceph-disk/ceph-conf tweak_path: 104: command_fixture ceph-osd command_fixture: 86: local command=ceph-osd ccommand_fixture: 88: which ceph-osd command_fixture: 88: '[' ./ceph-osd = ./ceph-osd ']' command_fixture: 90: cat command_fixture: 95: chmod +x test-ceph-disk/ceph-osd tweak_path: 106: test_activate_dir test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 547f6d5a-af1f-4323-b8a8-a26259a38702 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise 547f6d5a-af1f-4323-b8a8-a26259a38702 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid 547f6d5a-af1f-4323-b8a8-a26259a38702 --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:26.266447 2ab2c0d6abc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:27.010686 2ab2c0d6abc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:27.011307 2ab2c0d6abc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:27.183495 2ab2c0d6abc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:27.183571 2ab2c0d6abc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:27.183712 2ab2c0d6abc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... 
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy tweak_path: 108: '[' '!' -f test-ceph-disk/used-ceph-conf ']' tweak_path: 109: '[' '!' -f test-ceph-disk/used-ceph-osd ']' tweak_path: 111: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=27756 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27756 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27756 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27756 ./test/ceph-disk.sh: line 79: kill: (27756) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=27939 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27939 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27939 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 27939 ./test/ceph-disk.sh: line 79: kill: (27939) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk tweak_path: 113: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf tweak_path: 115: command_fixture ceph-conf command_fixture: 86: local command=ceph-conf ccommand_fixture: 88: which ceph-conf command_fixture: 88: '[' ./ceph-conf = ./ceph-conf ']' command_fixture: 90: cat command_fixture: 95: chmod +x test-ceph-disk/ceph-conf tweak_path: 116: command_fixture ceph-osd command_fixture: 86: local command=ceph-osd ccommand_fixture: 88: which ceph-osd command_fixture: 88: '[' ./ceph-osd = ./ceph-osd ']' command_fixture: 90: cat command_fixture: 95: chmod +x test-ceph-disk/ceph-osd tweak_path: 118: use_prepend_to_path 
test_activate_dir use_prepend_to_path: 127: local ceph_disk_args use_prepend_to_path: 128: ceph_disk_args+=' --statedir=test-ceph-disk' use_prepend_to_path: 129: ceph_disk_args+=' --sysconfdir=test-ceph-disk' use_prepend_to_path: 130: ceph_disk_args+=' --prepend-to-path=test-ceph-disk' use_prepend_to_path: 131: ceph_disk_args+=' --verbose' use_prepend_to_path: 132: CEPH_DISK_ARGS=' --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path=test-ceph-disk --verbose' use_prepend_to_path: 132: test_activate_dir test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path=test-ceph-disk --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path=test-ceph-disk --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 4eb059be-594b-4a4b-97ad-ad9f3cb76522 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise 4eb059be-594b-4a4b-97ad-ad9f3cb76522 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... 
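
The two command_fixture calls above write wrapper scripts into test-ceph-disk and mark them executable; with --prepend-to-path=test-ceph-disk, ceph-disk then resolves ceph-conf and ceph-osd to those wrappers, which is why the INFO lines in this run say "Running command: test-ceph-disk/ceph-osd ..." instead of plain ceph-osd. The wrapper body itself is hidden inside the untraced cat at line 90 of the script, so the following is only a plausible reconstruction, inferred from the used-ceph-conf and used-ceph-osd marker files that tweak_path checks afterwards (lines 108/109 and 120/121):

    # hypothetical body of the fixture written by command_fixture for ceph-conf
    cat > test-ceph-disk/ceph-conf <<'EOF'
    #!/bin/bash
    touch test-ceph-disk/used-ceph-conf   # leave proof that the wrapper ran
    exec ./ceph-conf "$@"                 # then defer to the real binary
    EOF
    chmod +x test-ceph-disk/ceph-conf     # this chmod is the one traced at line 95
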
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid 4eb059be-594b-4a4b-97ad-ad9f3cb76522 --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:34.668083 2b856aab8bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:34.699904 2b856aab8bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:34.700315 2b856aab8bc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:34.756260 2b856aab8bc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:34.756325 2b856aab8bc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:34.756433 2b856aab8bc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: test-ceph-disk/ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy tweak_path: 120: '[' -f test-ceph-disk/used-ceph-conf ']' tweak_path: 121: '[' -f test-ceph-disk/used-ceph-osd ']' tweak_path: 123: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=28197 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28197 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28197 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 
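
A note on the teardown loop running here: the doubled first letter in prefixes like kkill_daemons is bash repeating the first character of PS4 once per subshell nesting level, not log corruption. Reassembled from the traced line numbers 75-80, kill_daemons is a retry loop with increasing back-off; this is a sketch, assuming $DIR is the per-test directory:

    kill_daemons() {
        for pidfile in $(find $DIR | grep pidfile) ; do
            pid=$(cat $pidfile)
            for try in 0 1 1 1 2 3 ; do
                kill $pid || break    # kill fails with ESRCH once the daemon is gone
                sleep $try            # back off: 0s, then 1s three times, 2s, 3s
            done
        done
    }
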
kill_daemons: 79: kill 28197 ./test/ceph-disk.sh: line 79: kill: (28197) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=28391 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28391 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28391 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28391 ./test/ceph-disk.sh: line 79: kill: (28391) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk run: 237: for action in '$actions' run: 238: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf run: 239: test_activate_dir_magic ttest_activate_dir_magic: 156: uuidgen test_activate_dir_magic: 156: local uuid=cacab642-7501-4c17-91a5-0ea490b159b7 test_activate_dir_magic: 157: local osd_data=test-ceph-disk/osd test_activate_dir_magic: 159: echo a failure to create the fsid file implies the magic file is not created a failure to create the fsid file implies the magic file is not created test_activate_dir_magic: 161: mkdir -p test-ceph-disk/osd/fsid test_activate_dir_magic: 162: CEPH_ARGS='--fsid cacab642-7501-4c17-91a5-0ea490b159b7' test_activate_dir_magic: 162: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd test_activate_dir_magic: 164: grep --quiet 'Is a directory' test-ceph-disk/out test_activate_dir_magic: 165: '[' -f test-ceph-disk/osd/magic ']' test_activate_dir_magic: 166: rmdir test-ceph-disk/osd/fsid test_activate_dir_magic: 168: echo successfully prepare the OSD successfully prepare the OSD test_activate_dir_magic: 171: tee test-ceph-disk/out test_activate_dir_magic: 170: CEPH_ARGS='--fsid cacab642-7501-4c17-91a5-0ea490b159b7' test_activate_dir_magic: 170: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir_magic: 172: grep --quiet 'Preparing osd data dir' test-ceph-disk/out test_activate_dir_magic: 173: grep --quiet cacab642-7501-4c17-91a5-0ea490b159b7 test-ceph-disk/osd/ceph_fsid test_activate_dir_magic: 174: '[' -f test-ceph-disk/osd/magic ']' test_activate_dir_magic: 176: echo will not override an existing OSD will not override an existing OSD test_activate_dir_magic: 179: tee test-ceph-disk/out ttest_activate_dir_magic: 178: uuidgen test_activate_dir_magic: 178: CEPH_ARGS='--fsid 5be1652e-3628-4e71-ac35-1b34c8b6fdee' test_activate_dir_magic: 178: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Data dir test-ceph-disk/osd already exists test_activate_dir_magic: 180: grep --quiet 'ceph-disk:Data dir .* already exists' test-ceph-disk/out test_activate_dir_magic: 181: grep --quiet cacab642-7501-4c17-91a5-0ea490b159b7 test-ceph-disk/osd/ceph_fsid run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile teardown: 54: rm -fr test-ceph-disk run: 237: for action in '$actions' run: 238: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf run: 239: test_activate_dir test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is e7376f6f-c8ba-4e5d-bd97-dd2183bf9081 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise e7376f6f-c8ba-4e5d-bd97-dd2183bf9081 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid e7376f6f-c8ba-4e5d-bd97-dd2183bf9081 --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:42.053416 2abe9b495bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:42.090900 2abe9b495bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:42.091433 2abe9b495bc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:42.431915 2abe9b495bc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:42.431994 2abe9b495bc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:42.432144 2abe9b495bc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... 
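
Every test_activate_dir run finishes with the same end-to-end smoke test, traced again just below: shrink the default pool to a single replica, add the OSD to the CRUSH map, then check that an object survives a rados round trip. As a sketch (the redirect of echo FOO into the BAR file is not visible in xtrace output, so that detail is inferred):

    timeout 360 ./ceph osd pool set rbd size 1     # one OSD, so replica count 1
    ./ceph osd crush add osd.0 1 root=default host=localhost
    echo FOO > test-ceph-disk/BAR                  # inferred: xtrace hides redirects
    timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR
    timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy
    diff test-ceph-disk/BAR test-ceph-disk/BAR.copy   # a non-zero diff fails the test
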
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=28731 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28731 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28731 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28731 ./test/ceph-disk.sh: line 79: kill: (28731) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=28914 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28914 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28914 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 28914 ./test/ceph-disk.sh: line 79: kill: (28914) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk run: 237: for action in '$actions' run: 238: setup setup: 47: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile find: `test-ceph-disk': No such file or directory teardown: 54: rm -fr test-ceph-disk setup: 48: mkdir test-ceph-disk setup: 49: touch test-ceph-disk/ceph.conf run: 239: test_keyring_path test_keyring_path: 223: test_activate_dir test_keyring_path: 223: tee test-ceph-disk/test_keyring test_activate_dir: 185: run_mon run_mon: 58: local mon_dir=test-ceph-disk/a run_mon: 61: ./ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a ./ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a ./ceph-mon: set fsid to 2bc5bc92-b9d1-493f-98de-3b1e089c679f ./ceph-mon: created monfs at test-ceph-disk/a for mon.a run_mon: 68: ./ceph-mon --id a --mon-data=test-ceph-disk/a --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451 test_activate_dir: 187: local osd_data=test-ceph-disk/osd 
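
This run belongs to test_keyring_path (line 223 above), which pipes the whole of test_activate_dir through tee into test-ceph-disk/test_keyring and then greps the capture to prove ceph-disk passed the bootstrap keyring to its ceph invocations. The shape of the check, as a sketch; the 2>&1 is an assumption, since the INFO:ceph-disk lines land on stderr and must be captured for the grep at line 224 to succeed:

    test_keyring_path() {
        test_activate_dir 2>&1 | tee test-ceph-disk/test_keyring
        grep --quiet 'keyring test-ceph-disk/bootstrap-osd/ceph.keyring' \
            test-ceph-disk/test_keyring
    }
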
test_activate_dir: 189: /bin/mkdir -p test-ceph-disk/osd test_activate_dir: 190: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size DEBUG:ceph-disk:Preparing osd data dir test-ceph-disk/osd test_activate_dir: 193: CEPH_ARGS='--fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f --chdir= --run-dir=test-ceph-disk --mon-host=127.0.0.1:7451 --log-file=test-ceph-disk/$name.log --pid-file=test-ceph-disk/$name.pidfile --osd-pool-default-erasure-code-directory=.libs --auth-supported=none --osd-journal-size=100 --osd-data=test-ceph-disk/osd' test_activate_dir: 193: /usr/bin/timeout 360 ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/osd DEBUG:ceph-disk:Cluster uuid is 2bc5bc92-b9d1-493f-98de-3b1e089c679f INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid DEBUG:ceph-disk:Cluster name is ceph DEBUG:ceph-disk:OSD uuid is 202396ba-c3da-4618-a480-1ebbaa9d13c1 DEBUG:ceph-disk:Allocating OSD id... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise 202396ba-c3da-4618-a480-1ebbaa9d13c1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** DEBUG:ceph-disk:OSD id is 0 DEBUG:ceph-disk:Initializing OSD... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/osd/activate.monmap *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/osd/activate.monmap --osd-data test-ceph-disk/osd --osd-journal test-ceph-disk/osd/journal --osd-uuid 202396ba-c3da-4618-a480-1ebbaa9d13c1 --keyring test-ceph-disk/osd/keyring 2014-10-08 11:18:49.487627 2b1ec59b1bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:49.526202 2b1ec59b1bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway 2014-10-08 11:18:49.526877 2b1ec59b1bc0 -1 filestore(test-ceph-disk/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:18:49.553914 2b1ec59b1bc0 -1 created object store test-ceph-disk/osd journal test-ceph-disk/osd/journal for osd.0 fsid 2bc5bc92-b9d1-493f-98de-3b1e089c679f 2014-10-08 11:18:49.554004 2b1ec59b1bc0 -1 auth: error reading file: test-ceph-disk/osd/keyring: can't open test-ceph-disk/osd/keyring: (2) No such file or directory 2014-10-08 11:18:49.554145 2b1ec59b1bc0 -1 created new key in keyring test-ceph-disk/osd/keyring DEBUG:ceph-disk:Authorizing OSD key... INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/osd/keyring osd allow * mon allow profile osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-ceph-disk/osd INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/osd --osd-journal=test-ceph-disk/osd/journal starting osd.0 at :/0 osd_data test-ceph-disk/osd test-ceph-disk/osd/journal test_activate_dir: 198: /usr/bin/timeout 360 ./ceph osd pool set rbd size 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 0 size to 1 ttest_activate_dir: 199: /bin/cat test-ceph-disk/osd/whoami test_activate_dir: 199: local id=0 test_activate_dir: 200: local weight=1 test_activate_dir: 201: ./ceph osd crush add osd.0 1 root=default host=localhost *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map test_activate_dir: 202: echo FOO test_activate_dir: 203: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR test_activate_dir: 204: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy test_activate_dir: 205: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy test_keyring_path: 224: grep --quiet 'keyring test-ceph-disk/bootstrap-osd/ceph.keyring' test-ceph-disk/test_keyring run: 240: teardown teardown: 53: kill_daemons kkill_daemons: 75: find test-ceph-disk kkill_daemons: 75: grep pidfile kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/mon.a.pidfile kill_daemons: 77: pid=29169 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 29169 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 29169 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 29169 ./test/ceph-disk.sh: line 79: kill: (29169) - No such process kill_daemons: 79: break kill_daemons: 76: for pidfile in '$(find $DIR | grep pidfile)' kkill_daemons: 77: cat test-ceph-disk/osd.0.pidfile kill_daemons: 77: pid=29355 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 29355 kill_daemons: 80: sleep 0 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 29355 kill_daemons: 80: sleep 1 kill_daemons: 78: for try in 0 1 1 1 2 3 kill_daemons: 79: kill 29355 ./test/ceph-disk.sh: line 79: kill: (29355) - No such process kill_daemons: 79: break teardown: 54: rm -fr test-ceph-disk PASS: test/ceph-disk.sh main: 105: setup mon-handle-forward setup: 18: local dir=mon-handle-forward setup: 19: teardown mon-handle-forward teardown: 24: local dir=mon-handle-forward teardown: 25: 
kill_daemons mon-handle-forward kill_daemons: 60: local dir=mon-handle-forward kkill_daemons: 59: find mon-handle-forward kkill_daemons: 59: grep pidfile find: `mon-handle-forward': No such file or directory teardown: 26: rm -fr mon-handle-forward setup: 20: mkdir mon-handle-forward main: 106: local code main: 107: run mon-handle-forward run: 20: local dir=mon-handle-forward run: 22: PORT=7451 run: 23: MONA=127.0.0.1:7451 run: 24: MONB=127.0.0.1:7452 rrun: 26: uuidgen run: 26: FSID=b546b251-c068-4be9-b1cd-94de6d22091d run: 27: export CEPH_ARGS run: 28: CEPH_ARGS+='--fsid=b546b251-c068-4be9-b1cd-94de6d22091d --auth-supported=none ' run: 29: CEPH_ARGS+='--mon-initial-members=a,b --mon-host=127.0.0.1:7451,127.0.0.1:7452 ' run: 30: run_mon mon-handle-forward a --public-addr 127.0.0.1:7451 run_mon: 30: local dir=mon-handle-forward run_mon: 31: shift run_mon: 32: local id=a run_mon: 33: shift run_mon: 34: dir+=/a run_mon: 37: ./ceph-mon --id a --mkfs --mon-data=mon-handle-forward/a --run-dir=mon-handle-forward/a --public-addr 127.0.0.1:7451 ./ceph-mon: renaming mon.noname-a 127.0.0.1:7451/0 to mon.a ./ceph-mon: set fsid to b546b251-c068-4be9-b1cd-94de6d22091d ./ceph-mon: created monfs at mon-handle-forward/a for mon.a run_mon: 43: ./ceph-mon --id a --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=mon-handle-forward/a --log-file=mon-handle-forward/a/log --mon-cluster-log-file=mon-handle-forward/a/log --run-dir=mon-handle-forward/a --pid-file=mon-handle-forward/a/pidfile --public-addr 127.0.0.1:7451 run: 31: run_mon mon-handle-forward b --public-addr 127.0.0.1:7452 run_mon: 30: local dir=mon-handle-forward run_mon: 31: shift run_mon: 32: local id=b run_mon: 33: shift run_mon: 34: dir+=/b run_mon: 37: ./ceph-mon --id b --mkfs --mon-data=mon-handle-forward/b --run-dir=mon-handle-forward/b --public-addr 127.0.0.1:7452 ./ceph-mon: renaming mon.noname-b 127.0.0.1:7452/0 to mon.b ./ceph-mon: set fsid to b546b251-c068-4be9-b1cd-94de6d22091d ./ceph-mon: created monfs at mon-handle-forward/b for mon.b run_mon: 43: ./ceph-mon --id b --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=mon-handle-forward/b --log-file=mon-handle-forward/b/log --mon-cluster-log-file=mon-handle-forward/b/log --run-dir=mon-handle-forward/b --pid-file=mon-handle-forward/b/pidfile --public-addr 127.0.0.1:7452 run: 34: timeout 10 ./ceph --mon-host 127.0.0.1:7451 mon stat *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** e1: 2 mons at {a=127.0.0.1:7451/0,b=127.0.0.1:7452/0}, election epoch 4, quorum 0,1 a,b run: 36: ./ceph --admin-daemon mon-handle-forward/b/ceph-mon.b.asok mon_status run: 37: grep '"peon"' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "state": "peon", run: 39: ./ceph --mon-host 127.0.0.1:7451 osd pool create POOL1 12 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'POOL1' created run: 40: grep 'mon_command(.*"POOL1"' mon-handle-forward/a/log 2014-10-08 11:18:58.083149 2b2f8918e700 10 -- 127.0.0.1:7451/0 >> 127.0.0.1:0/1029706 pipe(0x4b0b000 sd=24 :7451 s=2 pgs=1 cs=1 l=1 c=0x4b029a0).reader got message 6 0x4bee800 mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL1"} v 0) v1 2014-10-08 11:18:58.083215 2b2f88888700 1 -- 127.0.0.1:7451/0 <== client.4100 127.0.0.1:0/1029706 6 ==== 
mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL1"} v 0) v1 ==== 102+0+0 (2696259860 0 0) 0x4bee800 con 0x4b029a0 2014-10-08 11:18:58.083513 2b2f88888700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL1"} v 0) v1 2014-10-08 11:18:58.083694 2b2f88888700 10 mon.a@0(leader).paxosservice(osdmap 1..1) dispatch mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL1"} v 0) v1 from client.4100 127.0.0.1:0/1029706 2014-10-08 11:18:58.083785 2b2f88888700 10 mon.a@0(leader).osd e1 preprocess_query mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL1"} v 0) v1 from client.4100 127.0.0.1:0/1029706 2014-10-08 11:18:58.083970 2b2f88888700 7 mon.a@0(leader).osd e1 prepare_update mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL1"} v 0) v1 from client.4100 127.0.0.1:0/1029706 run: 41: grep 'mon_command(.*"POOL1"' mon-handle-forward/b/log run: 43: ./ceph --mon-host 127.0.0.1:7452 osd pool create POOL2 12 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'POOL2' created run: 44: grep 'forward_request.*mon_command(.*"POOL2"' mon-handle-forward/b/log 2014-10-08 11:18:58.350868 2b148510e700 10 mon.b@1(peon) e1 forward_request 3 request mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL2"} v 0) v1 run: 45: grep ' forward(mon_command(.*"POOL2"' mon-handle-forward/a/log 2014-10-08 11:18:58.351460 2b2f8928f700 10 -- 127.0.0.1:7451/0 >> 127.0.0.1:7452/0 pipe(0x4ad1500 sd=14 :42602 s=2 pgs=6 cs=1 l=0 c=0x49e6b00).reader got message 47 0x4bb7c80 forward(mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL2"} v 0) v1 caps allow * tid 3 con_features 35184372088831) to leader v2 2014-10-08 11:18:58.351517 2b2f88888700 1 -- 127.0.0.1:7451/0 <== mon.1 127.0.0.1:7452/0 47 ==== forward(mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL2"} v 0) v1 caps allow * tid 3 con_features 35184372088831) to leader v2 ==== 358+0+0 (1592461472 0 0) 0x4bb7c80 con 0x49e6b00 rrun: 48: sed -n -e 's|.*127.0.0.1:0.*accept features \([0-9][0-9]*\)|\1|p' run: 48: features=35184372088831 run: 49: grep ' forward(mon_command(.*"POOL2".*con_features 35184372088831' mon-handle-forward/a/log 2014-10-08 11:18:58.351460 2b2f8928f700 10 -- 127.0.0.1:7451/0 >> 127.0.0.1:7452/0 pipe(0x4ad1500 sd=14 :42602 s=2 pgs=6 cs=1 l=0 c=0x49e6b00).reader got message 47 0x4bb7c80 forward(mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL2"} v 0) v1 caps allow * tid 3 con_features 35184372088831) to leader v2 2014-10-08 11:18:58.351517 2b2f88888700 1 -- 127.0.0.1:7451/0 <== mon.1 127.0.0.1:7452/0 47 ==== forward(mon_command({"prefix": "osd pool create", "pg_num": 12, "pool": "POOL2"} v 0) v1 caps allow * tid 3 con_features 35184372088831) to leader v2 ==== 358+0+0 (1592461472 0 0) 0x4bb7c80 con 0x49e6b00 main: 108: code=0 main: 112: teardown mon-handle-forward teardown: 24: local dir=mon-handle-forward teardown: 25: kill_daemons mon-handle-forward kill_daemons: 60: local dir=mon-handle-forward kkill_daemons: 59: find mon-handle-forward kkill_daemons: 59: grep pidfile kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)' kkill_daemons: 62: cat mon-handle-forward/a/pidfile kill_daemons: 62: pid=29614 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 29614 kill_daemons: 65: sleep 0 kill_daemons: 63: for try in 0 1 1 1 2 3 kill_daemons: 64: kill -9 29614 kill_daemons: 65: sleep 1 kill_daemons: 63: for try in 0 1 1 1 2 3 
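
The greps above are the heart of mon-handle-forward.sh: a pool create sent to peon mon.b must show up as a forward_request in b's log and as a forward(mon_command(...)) in leader mon.a's log, carrying the connection features the leader accepted. Condensed, as a sketch; the input the sed command reads is not visible in the trace, so feeding it mon.a's log (plus head -1 to keep a single value) is an assumption:

    ./ceph --mon-host 127.0.0.1:7452 osd pool create POOL2 12    # talk to the peon
    grep 'forward_request.*mon_command(.*"POOL2"' mon-handle-forward/b/log  # b forwarded it
    grep ' forward(mon_command(.*"POOL2"' mon-handle-forward/a/log          # a received it
    features=$(sed -n -e 's|.*127.0.0.1:0.*accept features \([0-9][0-9]*\)|\1|p' \
        < mon-handle-forward/a/log | head -1)
    grep " forward(mon_command(.*\"POOL2\".*con_features $features" mon-handle-forward/a/log
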
kill_daemons: 64: kill -9 29614
kill_daemons: 64: break
kill_daemons: 61: for pidfile in '$(find $dir | grep pidfile)'
kkill_daemons: 62: cat mon-handle-forward/b/pidfile
kill_daemons: 62: pid=29632
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 29632
kill_daemons: 65: sleep 0
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 29632
kill_daemons: 65: sleep 1
kill_daemons: 63: for try in 0 1 1 1 2 3
kill_daemons: 64: kill -9 29632
kill_daemons: 64: break
teardown: 26: rm -fr mon-handle-forward
main: 113: return 0
PASS: test/mon/mon-handle-forward.sh
Run unit tests that need a cluster, using vstart.sh
================ START ================
../qa/workunits/cephtool/test.sh --asok-does-not-need-root
=======================================
ip 127.0.0.1
NOTE: hostname resolves to loopback; remote hosts will not be able to connect. either adjust /etc/hosts, or edit this script to use your machine's real IP.
creating /srv/autobuild-ceph/gitbuilder.git/build/src//keyring
./monmaptool --create --clobber --add a 127.0.0.1:6789 --print /tmp/ceph_monmap.29812
./monmaptool: monmap file /tmp/ceph_monmap.29812
./monmaptool: generated fsid e01be220-a772-4c8c-833d-6c5f29e2eed2
epoch 0
fsid e01be220-a772-4c8c-833d-6c5f29e2eed2
last_changed 2014-10-08 11:19:00.829984
created 2014-10-08 11:19:00.829984
0: 127.0.0.1:6789/0 mon.a
./monmaptool: writing epoch 0 to /tmp/ceph_monmap.29812 (1 monitors)
rm -rf /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/mon.a
mkdir -p /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/mon.a
./ceph-mon --mkfs -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf -i a --monmap=/tmp/ceph_monmap.29812 --keyring=/srv/autobuild-ceph/gitbuilder.git/build/src//keyring
./ceph-mon: set fsid to e12b5778-21f1-4b32-9b0e-f5f1d48eeafe
./ceph-mon: created monfs at /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/mon.a for mon.a
./ceph-mon -i a -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf
./vstart.sh: 482: ./vstart.sh: btrfs: not found
add osd0 6528f0a4-dabf-46fa-ab4c-36a7e41f0742
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
0
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
add item id 0 name 'osd.0' weight 1 at location {host=gitbuilder-ceph-tarball-precise-amd64-basic,root=default} to crush map
2014-10-08 11:19:01.624912 2b1907823bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-10-08 11:19:01.651452 2b1907823bc0 -1 journal FileJournal::_open: disabling aio for non-block journal.
Use journal_force_aio to force use of aio anyway 2014-10-08 11:19:01.651974 2b1907823bc0 -1 filestore(/srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:19:01.682059 2b1907823bc0 -1 created object store /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0 journal /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0.journal for osd.0 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe 2014-10-08 11:19:01.682137 2b1907823bc0 -1 auth: error reading file: /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0/keyring: can't open /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0/keyring: (2) No such file or directory 2014-10-08 11:19:01.682263 2b1907823bc0 -1 created new key in keyring /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0/keyring adding osd0 key to auth repository *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.0 start osd0 ./ceph-osd -i 0 -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf starting osd.0 at :/0 osd_data /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0 /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd0.journal ./vstart.sh: 482: ./vstart.sh: btrfs: not found add osd1 d1c362f0-26d3-43c3-842a-16ba0f1ccfbf *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 1 name 'osd.1' weight 1 at location {host=gitbuilder-ceph-tarball-precise-amd64-basic,root=default} to crush map 2014-10-08 11:19:02.869862 2b867900bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:19:02.934755 2b867900bbc0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway 2014-10-08 11:19:02.935265 2b867900bbc0 -1 filestore(/srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:19:02.966617 2b867900bbc0 -1 created object store /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1 journal /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1.journal for osd.1 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe 2014-10-08 11:19:02.966689 2b867900bbc0 -1 auth: error reading file: /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1/keyring: can't open /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1/keyring: (2) No such file or directory 2014-10-08 11:19:02.966826 2b867900bbc0 -1 created new key in keyring /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1/keyring adding osd1 key to auth repository *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.1 start osd1 ./ceph-osd -i 1 -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf starting osd.1 at :/0 osd_data /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1 /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd1.journal ./vstart.sh: 482: ./vstart.sh: btrfs: not found add osd2 0c1200ff-b9ab-4829-b942-4112257d80f1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** add item id 2 name 'osd.2' weight 1 at location {host=gitbuilder-ceph-tarball-precise-amd64-basic,root=default} to crush map 2014-10-08 11:19:04.012525 2b8b1f127bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway 2014-10-08 11:19:04.036329 2b8b1f127bc0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway 2014-10-08 11:19:04.036835 2b8b1f127bc0 -1 filestore(/srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory 2014-10-08 11:19:04.066479 2b8b1f127bc0 -1 created object store /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2 journal /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2.journal for osd.2 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe 2014-10-08 11:19:04.066538 2b8b1f127bc0 -1 auth: error reading file: /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2/keyring: can't open /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2/keyring: (2) No such file or directory 2014-10-08 11:19:04.066634 2b8b1f127bc0 -1 created new key in keyring /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2/keyring adding osd2 key to auth repository *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for osd.2 start osd2 ./ceph-osd -i 2 -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf starting osd.2 at :/0 osd_data /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2 /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/osd2.journal creating /srv/autobuild-ceph/gitbuilder.git/build/src//test_dev/mds.a/keyring *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for mds.a ./ceph -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf -k /srv/autobuild-ceph/gitbuilder.git/build/src//keyring osd pool create cephfs_data 8 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cephfs_data' created ./ceph -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf -k /srv/autobuild-ceph/gitbuilder.git/build/src//keyring osd pool create cephfs_metadata 8 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cephfs_metadata' created ./ceph -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf -k /srv/autobuild-ceph/gitbuilder.git/build/src//keyring fs new cephfs cephfs_metadata cephfs_data *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** new fs with metadata pool 2 and data pool 1 ./ceph-mds -i a -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf starting mds.a at :/0 ./ceph -c /srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf -k /srv/autobuild-ceph/gitbuilder.git/build/src//keyring mds set max_mds 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output. 
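At this point vstart.sh has brought up one monitor, three OSDs, and an MDS, and created a CephFS; the exports that follow are how the test shell points the ceph CLI at this cluster. A minimal sketch of talking to a vstart.sh cluster by hand, assuming the same in-tree build layout as this run:

cd src
export CEPH_CONF=$PWD/ceph.conf    # written by vstart.sh
export CEPH_KEYRING=$PWD/keyring
./ceph -s                          # cluster status; tiny dev clusters often report HEALTH_WARN
./stop.sh                          # tear the cluster back down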
export PYTHONPATH=./pybind export LD_LIBRARY_PATH=.libs export CEPH_CONF=/srv/autobuild-ceph/gitbuilder.git/build/src//ceph.conf export CEPH_KEYRING=/srv/autobuild-ceph/gitbuilder.git/build/src//keyring + set -e + set -o functrace + PS4=' ${FUNCNAME[0]}: $LINENO: ' : 6: SUDO=sudo : 49: TMPDIR=/tmp/cephtool30851 : 50: mkdir /tmp/cephtool30851 : 51: trap 'rm -fr /tmp/cephtool30851' 0 : 53: TMPFILE=/tmp/cephtool30851/test_invalid.30851 : 1250: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_mon_injectargs_SI test_mon_injectargs_SI: 167: get_config_value_or_die mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "10000"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"10000"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:10000 get_config_value_or_die: 138: echo mon_pg_warn_min_objects:10000 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=10000 get_config_value_or_die: 140: echo 10000 get_config_value_or_die: 141: return 0 test_mon_injectargs_SI: 167: initial_value=10000 test_mon_injectargs_SI: 168: ceph daemon mon.a config set mon_pg_warn_min_objects 10 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "success": "mon_pg_warn_min_objects = '10' "} test_mon_injectargs_SI: 169: expect_config_value mon.a mon_pg_warn_min_objects 10 expect_config_value: 146: local target config_opt expected_val val expect_config_value: 147: target=mon.a expect_config_value: 148: config_opt=mon_pg_warn_min_objects expect_config_value: 149: expected_val=10 expect_config_value: 151: get_config_value_or_die mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "10"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"10"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:10 get_config_value_or_die: 138: echo mon_pg_warn_min_objects:10 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=10 get_config_value_or_die: 140: echo 10 get_config_value_or_die: 141: return 0 expect_config_value: 151: val=10 expect_config_value: 153: [[ 10 != \1\0 ]] test_mon_injectargs_SI: 170: ceph daemon mon.a config set mon_pg_warn_min_objects 10K *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "success": "mon_pg_warn_min_objects = '10240' "} test_mon_injectargs_SI: 171: expect_config_value mon.a mon_pg_warn_min_objects 10240 expect_config_value: 146: local target config_opt expected_val val expect_config_value: 147: target=mon.a expect_config_value: 148: config_opt=mon_pg_warn_min_objects expect_config_value: 149: expected_val=10240 
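The SI-suffix assertions being traced here boil down to three behaviours: plain integers are stored as-is, binary suffixes are expanded (K = 2^10, G = 2^30), and an unknown suffix is rejected. A condensed sketch, using the same option and the same two injection paths the test uses:

ceph daemon mon.a config set mon_pg_warn_min_objects 10K     # stored as '10240'
ceph tell mon.a injectargs '--mon_pg_warn_min_objects 1G'    # stored as '1073741824'
ceph daemon mon.a config set mon_pg_warn_min_objects 10F     # fails: '10F': (22) Invalid argument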
expect_config_value: 151: get_config_value_or_die mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "10240"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"10240"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:10240 get_config_value_or_die: 138: echo mon_pg_warn_min_objects:10240 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=10240 get_config_value_or_die: 140: echo 10240 get_config_value_or_die: 141: return 0 expect_config_value: 151: val=10240 expect_config_value: 153: [[ 10240 != \1\0\2\4\0 ]] test_mon_injectargs_SI: 172: ceph daemon mon.a config set mon_pg_warn_min_objects 1G *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "success": "mon_pg_warn_min_objects = '1073741824' "} test_mon_injectargs_SI: 173: expect_config_value mon.a mon_pg_warn_min_objects 1073741824 expect_config_value: 146: local target config_opt expected_val val expect_config_value: 147: target=mon.a expect_config_value: 148: config_opt=mon_pg_warn_min_objects expect_config_value: 149: expected_val=1073741824 expect_config_value: 151: get_config_value_or_die mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "1073741824"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"1073741824"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:1073741824 get_config_value_or_die: 138: echo mon_pg_warn_min_objects:1073741824 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=1073741824 get_config_value_or_die: 140: echo 1073741824 get_config_value_or_die: 141: return 0 expect_config_value: 151: val=1073741824 expect_config_value: 153: [[ 1073741824 != \1\0\7\3\7\4\1\8\2\4 ]] test_mon_injectargs_SI: 174: ceph daemon mon.a config set mon_pg_warn_min_objects 10F *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_injectargs_SI: 175: check_response ''\''10F'\'': (22) Invalid argument' check_response: 108: expected_stderr_string=''\''10F'\'': (22) Invalid argument' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep ''\''10F'\'': (22) Invalid argument' /tmp/cephtool30851/test_invalid.30851 test_mon_injectargs_SI: 177: ceph tell mon.a injectargs '--mon_pg_warn_min_objects 10' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** injectargs:mon_pg_warn_min_objects = '10' test_mon_injectargs_SI: 178: expect_config_value mon.a mon_pg_warn_min_objects 10 expect_config_value: 146: local target config_opt expected_val val expect_config_value: 147: target=mon.a expect_config_value: 148: config_opt=mon_pg_warn_min_objects expect_config_value: 149: expected_val=10 expect_config_value: 151: get_config_value_or_die 
mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "10"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"10"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:10 get_config_value_or_die: 138: echo mon_pg_warn_min_objects:10 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=10 get_config_value_or_die: 140: echo 10 get_config_value_or_die: 141: return 0 expect_config_value: 151: val=10 expect_config_value: 153: [[ 10 != \1\0 ]] test_mon_injectargs_SI: 179: ceph tell mon.a injectargs '--mon_pg_warn_min_objects 10K' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** injectargs:mon_pg_warn_min_objects = '10240' test_mon_injectargs_SI: 180: expect_config_value mon.a mon_pg_warn_min_objects 10240 expect_config_value: 146: local target config_opt expected_val val expect_config_value: 147: target=mon.a expect_config_value: 148: config_opt=mon_pg_warn_min_objects expect_config_value: 149: expected_val=10240 expect_config_value: 151: get_config_value_or_die mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "10240"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"10240"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:10240 get_config_value_or_die: 138: echo mon_pg_warn_min_objects:10240 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=10240 get_config_value_or_die: 140: echo 10240 get_config_value_or_die: 141: return 0 expect_config_value: 151: val=10240 expect_config_value: 153: [[ 10240 != \1\0\2\4\0 ]] test_mon_injectargs_SI: 181: ceph tell mon.a injectargs '--mon_pg_warn_min_objects 1G' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** injectargs:mon_pg_warn_min_objects = '1073741824' test_mon_injectargs_SI: 182: expect_config_value mon.a mon_pg_warn_min_objects 1073741824 expect_config_value: 146: local target config_opt expected_val val expect_config_value: 147: target=mon.a expect_config_value: 148: config_opt=mon_pg_warn_min_objects expect_config_value: 149: expected_val=1073741824 expect_config_value: 151: get_config_value_or_die mon.a mon_pg_warn_min_objects get_config_value_or_die: 126: local target config_opt raw val get_config_value_or_die: 128: target=mon.a get_config_value_or_die: 129: config_opt=mon_pg_warn_min_objects get_config_value_or_die: 131: ceph daemon mon.a config get mon_pg_warn_min_objects get_config_value_or_die: 131: raw='{ "mon_pg_warn_min_objects": "1073741824"}' get_config_value_or_die: 132: [[ 0 -ne 0 ]] get_config_value_or_die: 137: echo '{' '"mon_pg_warn_min_objects":' '"1073741824"}' get_config_value_or_die: 137: sed -e 's/[{} "]//g' get_config_value_or_die: 137: raw=mon_pg_warn_min_objects:1073741824 get_config_value_or_die: 138: echo 
mon_pg_warn_min_objects:1073741824 get_config_value_or_die: 138: cut -f2 -d: get_config_value_or_die: 138: val=1073741824 get_config_value_or_die: 140: echo 1073741824 get_config_value_or_die: 141: return 0 expect_config_value: 151: val=1073741824 expect_config_value: 153: [[ 1073741824 != \1\0\7\3\7\4\1\8\2\4 ]] test_mon_injectargs_SI: 183: expect_false ceph injectargs mon.a '--mon_pg_warn_min_objects 10F' expect_false: 45: set -x expect_false: 46: ceph injectargs mon.a '--mon_pg_warn_min_objects 10F' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: injectargs:Parse error setting mon_pg_warn_min_objects to '10F' using injectargs. failed to parse arguments: mon.a expect_false: 46: return 0 test_mon_injectargs_SI: 184: ceph daemon mon.a config set mon_pg_warn_min_objects 10000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "success": "mon_pg_warn_min_objects = '10000' "} : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_tiering test_tiering: 190: ceph osd pool create slow 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'slow' created test_tiering: 191: ceph osd pool create slow2 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'slow2' created test_tiering: 192: ceph osd pool create cache 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache' created test_tiering: 193: ceph osd pool create cache2 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache2' created test_tiering: 194: ceph osd tier add slow cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache' is now (or already was) a tier of 'slow' test_tiering: 195: ceph osd tier add slow cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache2' is now (or already was) a tier of 'slow' test_tiering: 196: expect_false ceph osd tier add slow2 cache expect_false: 45: set -x expect_false: 46: ceph osd tier add slow2 cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: tier pool 'cache' is already a tier of 'slow' expect_false: 46: return 0 test_tiering: 198: ceph osd tier cache-mode cache writeback *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to writeback test_tiering: 199: ceph osd tier cache-mode cache forward *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to forward test_tiering: 200: ceph osd tier cache-mode cache readonly *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to readonly test_tiering: 201: ceph osd tier cache-mode cache forward *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to forward test_tiering: 202: ceph osd tier cache-mode cache none *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to none test_tiering: 203: ceph osd tier cache-mode cache writeback *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to writeback test_tiering: 204: expect_false ceph osd tier cache-mode cache none expect_false: 45: set -x expect_false: 46: ceph osd tier cache-mode cache none *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: unable to set cache-mode 'none' on a 'writeback' pool; only 
'forward','readforward' allowed. expect_false: 46: return 0 test_tiering: 205: expect_false ceph osd tier cache-mode cache readonly expect_false: 45: set -x expect_false: 46: ceph osd tier cache-mode cache readonly *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: unable to set cache-mode 'readonly' on a 'writeback' pool; only 'forward','readforward' allowed. expect_false: 46: return 0 test_tiering: 208: rados -p cache put /etc/passwd /etc/passwd test_tiering: 209: ceph tell 'osd.*' flush_pg_stats *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 211: ceph osd tier cache-mode cache forward *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to forward test_tiering: 212: expect_false ceph osd tier cache-mode cache none expect_false: 45: set -x expect_false: 46: ceph osd tier cache-mode cache none *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EBUSY: unable to set cache-mode 'none' on pool 'cache': dirty objects found expect_false: 46: return 0 test_tiering: 213: expect_false ceph osd tier cache-mode cache readonly expect_false: 45: set -x expect_false: 46: ceph osd tier cache-mode cache readonly *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EBUSY: unable to set cache-mode 'readonly' on pool 'cache': dirty objects found expect_false: 46: return 0 test_tiering: 214: ceph osd tier cache-mode cache writeback *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to writeback test_tiering: 216: rados -p cache rm /etc/passwd test_tiering: 217: rados -p cache cache-flush-evict-all /etc/passwd test_tiering: 218: ceph tell 'osd.*' flush_pg_stats *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 220: ceph osd tier cache-mode cache forward *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to forward test_tiering: 221: ceph osd tier cache-mode cache none *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to none test_tiering: 222: ceph osd tier cache-mode cache readonly *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to readonly test_tiering: 223: TRIES=0 test_tiering: 224: ceph osd pool set cache pg_num 3 --yes-i-really-mean-it test_tiering: 231: expect_false ceph osd pool set cache pg_num 4 expect_false: 45: set -x expect_false: 46: ceph osd pool set cache pg_num 4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EPERM: splits in cache pools must be followed by scrubs and leave sufficient free space to avoid overfilling. use --yes-i-really-mean-it to force. 
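The trace above pins down the cache-mode state machine: a writeback pool may only move to forward (or readforward), it cannot be set to none or readonly while dirty objects remain (EBUSY), and pg_num splits on a cache pool demand --yes-i-really-mean-it. The legal drain sequence the test exercises, condensed (cache-flush-evict-all flushes and evicts every object; the extra /etc/passwd argument in the trace is not needed):

ceph osd tier cache-mode cache forward    # stop taking new writes into the cache
rados -p cache cache-flush-evict-all      # drain dirty objects back to the base pool
ceph tell 'osd.*' flush_pg_stats          # let the monitor see the updated pg stats
ceph osd tier cache-mode cache none       # now allowed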
expect_false: 46: return 0 test_tiering: 232: ceph osd tier cache-mode cache none *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'cache' to none test_tiering: 233: ceph osd tier set-overlay slow cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** overlay for 'slow' is now (or already was) 'cache' test_tiering: 234: expect_false ceph osd tier set-overlay slow cache2 expect_false: 45: set -x expect_false: 46: ceph osd tier set-overlay slow cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: pool 'slow' has overlay 'cache'; please remove-overlay first expect_false: 46: return 0 test_tiering: 235: expect_false ceph osd tier remove slow cache expect_false: 45: set -x expect_false: 46: ceph osd tier remove slow cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EBUSY: tier pool 'cache' is the overlay for 'slow'; please remove-overlay first expect_false: 46: return 0 test_tiering: 236: ceph osd tier remove-overlay slow *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** there is now (or already was) no overlay for 'slow' test_tiering: 237: ceph osd tier set-overlay slow cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** overlay for 'slow' is now (or already was) 'cache2' test_tiering: 238: ceph osd tier remove-overlay slow *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** there is now (or already was) no overlay for 'slow' test_tiering: 239: ceph osd tier remove slow cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache' is now (or already was) not a tier of 'slow' test_tiering: 240: ceph osd tier add slow2 cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache' is now (or already was) a tier of 'slow2' test_tiering: 241: expect_false ceph osd tier set-overlay slow cache expect_false: 45: set -x expect_false: 46: ceph osd tier set-overlay slow cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: tier pool 'cache' is not a tier of 'slow' expect_false: 46: return 0 test_tiering: 242: ceph osd tier set-overlay slow2 cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** overlay for 'slow2' is now (or already was) 'cache' test_tiering: 243: ceph osd tier remove-overlay slow2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** there is now (or already was) no overlay for 'slow2' test_tiering: 244: ceph osd tier remove slow2 cache *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache' is now (or already was) not a tier of 'slow2' test_tiering: 245: ceph osd tier remove slow cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache2' is now (or already was) not a tier of 'slow' test_tiering: 248: rados -p cache2 put /etc/passwd /etc/passwd test_tiering: 249: ceph df test_tiering: 249: grep cache2 test_tiering: 249: grep ' 1 ' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 250: echo waiting for pg stats to flush waiting for pg stats to flush test_tiering: 251: sleep 2 test_tiering: 249: ceph df test_tiering: 249: grep cache2 test_tiering: 249: grep ' 1 ' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cache2 6 1147 0 38893M 1 test_tiering: 253: expect_false ceph osd tier add slow cache2 expect_false: 45: set -x expect_false: 46: ceph osd tier add slow cache2 *** 
DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error ENOTEMPTY: tier pool 'cache2' is not empty; --force-nonempty to force expect_false: 46: return 0 test_tiering: 254: ceph osd tier add slow cache2 --force-nonempty *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache2' is now (or already was) a tier of 'slow' test_tiering: 255: ceph osd tier remove slow cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache2' is now (or already was) not a tier of 'slow' test_tiering: 257: ceph osd pool ls test_tiering: 257: grep cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cache2 test_tiering: 258: ceph osd pool ls -f json-pretty test_tiering: 258: grep cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "cache2"] test_tiering: 259: ceph osd pool ls detail test_tiering: 259: grep cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 6 'cache2' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 56 flags hashpspool stripe_width 0 test_tiering: 260: ceph osd pool ls detail -f json-pretty test_tiering: 260: grep cache2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "pool_name": "cache2", test_tiering: 262: ceph osd pool delete cache cache --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache' removed test_tiering: 263: ceph osd pool delete cache2 cache2 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache2' removed test_tiering: 266: ceph osd pool create cache3 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache3' created test_tiering: 267: ceph osd tier add-cache slow cache3 1024000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache3' is now (or already was) a cache tier of 'slow' test_tiering: 268: ceph osd dump test_tiering: 268: grep cache3 test_tiering: 268: grep bloom test_tiering: 268: grep 'false_positive_probability: 0.05' test_tiering: 268: grep 'target_bytes 1024000' test_tiering: 268: grep '1200s x4' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 7 'cache3' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 61 flags hashpspool tier_of 3 cache_mode writeback target_bytes 1024000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1200s x4 min_read_recency_for_promote 1 stripe_width 0 test_tiering: 269: ceph osd tier remove slow cache3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache3' is now (or already was) not a tier of 'slow' test_tiering: 270: ceph osd pool ls test_tiering: 270: grep cache3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cache3 test_tiering: 271: ceph osd pool delete cache3 cache3 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache3' removed test_tiering: 272: ceph osd pool ls test_tiering: 272: grep cache3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 274: ceph osd pool delete slow2 slow2 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'slow2' removed test_tiering: 275: ceph osd pool delete slow slow --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and 
LD_LIBRARY_PATH *** pool 'slow' removed test_tiering: 278: ceph osd pool create datapool 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'datapool' created test_tiering: 279: ceph osd pool create cachepool 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cachepool' created test_tiering: 280: ceph osd tier add-cache datapool cachepool 1024000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cachepool' is now (or already was) a cache tier of 'datapool' test_tiering: 281: ceph osd pool delete cachepool cachepool --yes-i-really-really-mean-it test_tiering: 281: true test_tiering: 282: check_response 'EBUSY: pool '\''cachepool'\'' is a tier of '\''datapool'\''' check_response: 108: expected_stderr_string='EBUSY: pool '\''cachepool'\'' is a tier of '\''datapool'\''' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EBUSY: pool '\''cachepool'\'' is a tier of '\''datapool'\''' /tmp/cephtool30851/test_invalid.30851 test_tiering: 283: ceph osd pool delete datapool datapool --yes-i-really-really-mean-it test_tiering: 283: true test_tiering: 284: check_response 'EBUSY: pool '\''datapool'\'' has tiers cachepool' check_response: 108: expected_stderr_string='EBUSY: pool '\''datapool'\'' has tiers cachepool' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EBUSY: pool '\''datapool'\'' has tiers cachepool' /tmp/cephtool30851/test_invalid.30851 test_tiering: 285: ceph osd tier remove datapool cachepool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cachepool' is now (or already was) not a tier of 'datapool' test_tiering: 286: ceph osd pool delete cachepool cachepool --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cachepool' removed test_tiering: 287: ceph osd pool delete datapool datapool --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'datapool' removed test_tiering: 290: ceph osd pool create datapool 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'datapool' created test_tiering: 291: ceph osd pool create cache4 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache4' created test_tiering: 292: ceph osd tier add datapool cache4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache4' is now (or already was) a tier of 'datapool' test_tiering: 293: ceph osd pool set cache4 target_max_objects 5 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 11 target_max_objects to 5 test_tiering: 294: ceph osd pool set cache4 target_max_bytes 1000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set pool 11 target_max_bytes to 1000 test_tiering: 188: seq 1 5 test_tiering: 295: for f in '`seq 1 5`' test_tiering: 296: rados -p cache4 put foo1 /etc/passwd test_tiering: 295: for f in '`seq 1 5`' test_tiering: 296: rados -p cache4 put foo2 /etc/passwd test_tiering: 295: for f in '`seq 1 5`' test_tiering: 296: rados -p cache4 put foo3 /etc/passwd test_tiering: 295: for f in '`seq 1 5`' test_tiering: 296: rados -p cache4 put foo4 /etc/passwd test_tiering: 295: for f in '`seq 1 5`' test_tiering: 296: rados -p cache4 put foo5 /etc/passwd test_tiering: 298: ceph df test_tiering: 298: grep cache4 test_tiering: 298: 
grep ' 5 ' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 299: echo waiting for pg stats to flush waiting for pg stats to flush test_tiering: 300: sleep 2 test_tiering: 298: ceph df test_tiering: 298: grep cache4 test_tiering: 298: grep ' 5 ' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 299: echo waiting for pg stats to flush waiting for pg stats to flush test_tiering: 300: sleep 2 test_tiering: 298: ceph df test_tiering: 298: grep cache4 test_tiering: 298: grep ' 5 ' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cache4 11 5735 0 38888M 5 test_tiering: 302: ceph health test_tiering: 302: grep WARN test_tiering: 302: grep cache4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** HEALTH_WARN 'cache4' at/near target max; too few pgs per osd (9 < min 10) test_tiering: 303: ceph health detail test_tiering: 303: grep cache4 test_tiering: 303: grep 'target max' test_tiering: 303: grep objects *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cache pool 'cache4' with 5 objects at/near target max 5 objects test_tiering: 304: ceph health detail test_tiering: 304: grep cache4 test_tiering: 304: grep 'target max' test_tiering: 304: grep B *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cache pool 'cache4' with 5735B at/near target max 1000B test_tiering: 305: ceph osd tier remove datapool cache4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache4' is now (or already was) not a tier of 'datapool' test_tiering: 306: ceph osd pool delete cache4 cache4 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache4' removed test_tiering: 307: ceph osd pool delete datapool datapool --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'datapool' removed test_tiering: 313: ceph osd pool create basepoolA 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'basepoolA' created test_tiering: 314: ceph osd pool create basepoolB 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'basepoolB' created test_tiering: 315: ceph osd dump test_tiering: 315: grep 'pool.*basepoolA' test_tiering: 315: awk '{print $2;}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 315: poolA_id=12 test_tiering: 316: ceph osd dump test_tiering: 316: grep 'pool.*basepoolB' test_tiering: 316: awk '{print $2;}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 316: poolB_id=13 test_tiering: 318: ceph osd pool create cache5 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache5' created test_tiering: 319: ceph osd pool create cache6 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache6' created test_tiering: 320: ceph osd tier add basepoolA cache5 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache5' is now (or already was) a tier of 'basepoolA' test_tiering: 321: ceph osd tier add basepoolB cache6 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache6' is now (or already was) a tier of 'basepoolB' test_tiering: 322: ceph osd tier remove basepoolB cache5 test_tiering: 322: grep 'not a tier of' pool 'cache5' is now (or already was) not a tier of 'basepoolB' test_tiering: 323: ceph osd dump test_tiering: 323: grep 'pool.*'\''cache5'\''' 
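The basepoolA/basepoolB round being traced here checks the tiering bookkeeping directly: 'tier remove' against a base pool the cache does not belong to is a harmless no-op, and the authoritative relationship is the tier_of <pool-id> field in the osd dump. A condensed sketch of the same check:

ceph osd tier remove basepoolB cache5                   # no-op: cache5 is a tier of basepoolA
ceph osd dump | grep "pool.*'cache5'" | grep tier_of    # still shows tier_of 12 (basepoolA's id)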
test_tiering: 323: grep 'tier_of[ \t]\+12' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 14 'cache5' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 92 flags hashpspool tier_of 12 stripe_width 0 test_tiering: 324: ceph osd tier remove basepoolA cache6 test_tiering: 324: grep 'not a tier of' pool 'cache6' is now (or already was) not a tier of 'basepoolA' test_tiering: 325: ceph osd dump test_tiering: 325: grep 'pool.*'\''cache6'\''' test_tiering: 325: grep 'tier_of[ \t]\+13' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 15 'cache6' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 93 flags hashpspool tier_of 13 stripe_width 0 test_tiering: 327: ceph osd tier remove basepoolA cache5 test_tiering: 327: grep 'not a tier of' pool 'cache5' is now (or already was) not a tier of 'basepoolA' test_tiering: 328: ceph osd dump test_tiering: 328: grep 'pool.*'\''cache5'\''' test_tiering: 328: grep tier_of *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 329: ceph osd tier remove basepoolB cache6 test_tiering: 329: grep 'not a tier of' pool 'cache6' is now (or already was) not a tier of 'basepoolB' test_tiering: 330: ceph osd dump test_tiering: 330: grep 'pool.*'\''cache6'\''' test_tiering: 330: grep tier_of *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 332: ceph osd dump test_tiering: 332: grep 'pool.*'\''basepoolA'\''' test_tiering: 332: grep tiers *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 333: ceph osd dump test_tiering: 333: grep 'pool.*'\''basepoolB'\''' test_tiering: 333: grep tiers *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_tiering: 335: ceph osd pool delete cache6 cache6 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache6' removed test_tiering: 336: ceph osd pool delete cache5 cache5 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'cache5' removed test_tiering: 337: ceph osd pool delete basepoolB basepoolB --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'basepoolB' removed test_tiering: 338: ceph osd pool delete basepoolA basepoolA --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'basepoolA' removed : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_auth test_auth: 343: ceph auth add client.xx mon allow osd 'allow *' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for client.xx test_auth: 344: ceph auth export client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** export auth(auid = 18446744073709551615 key=AQDeHTVUCAiOGxAA1pkmLkVJyUYzvhIyL/jN4g== with 2 caps) test_auth: 345: ceph auth add client.xx -i client.xx.keyring *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_auth: 346: rm -f client.xx.keyring test_auth: 347: ceph auth list test_auth: 347: grep client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** installed auth entries: client.xx test_auth: 348: ceph auth get client.xx test_auth: 348: grep caps test_auth: 348: grep mon *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported keyring for client.xx caps mon 
= "allow" test_auth: 349: ceph auth get client.xx test_auth: 349: grep caps test_auth: 349: grep osd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported keyring for client.xx caps osd = "allow *" test_auth: 350: ceph auth get-key client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** AQDeHTVUCAiOGxAA1pkmLkVJyUYzvhIyL/jN4g== test_auth: 351: ceph auth print-key client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** AQDeHTVUCAiOGxAA1pkmLkVJyUYzvhIyL/jN4g== test_auth: 352: ceph auth print_key client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** AQDeHTVUCAiOGxAA1pkmLkVJyUYzvhIyL/jN4g== test_auth: 353: ceph auth caps client.xx osd 'allow rw' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** updated caps for client.xx test_auth: 354: expect_false 'ceph auth get client.xx | grep caps | grep mon' expect_false: 45: set -x expect_false: 46: 'ceph auth get client.xx | grep caps | grep mon' ../qa/workunits/cephtool/test.sh: line 46: ceph auth get client.xx | grep caps | grep mon: command not found expect_false: 46: return 0 test_auth: 355: ceph auth get client.xx test_auth: 355: grep osd test_auth: 355: grep 'allow rw' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported keyring for client.xx caps osd = "allow rw" test_auth: 356: ceph auth export test_auth: 356: grep client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported master keyring [client.xx] test_auth: 357: ceph auth export -o authfile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported master keyring test_auth: 358: ceph auth import -i authfile *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** imported keyring test_auth: 359: ceph auth export -o authfile2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported master keyring test_auth: 360: diff authfile authfile2 test_auth: 361: rm authfile authfile2 test_auth: 362: ceph auth del client.xx *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** updated : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_auth_profiles test_auth_profiles: 367: ceph auth add client.xx-profile-ro mon 'allow profile read-only' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for client.xx-profile-ro test_auth_profiles: 368: ceph auth add client.xx-profile-rw mon 'allow profile read-write' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for client.xx-profile-rw test_auth_profiles: 369: ceph auth add client.xx-profile-rd mon 'allow profile role-definer' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for client.xx-profile-rd test_auth_profiles: 371: ceph auth export *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported master keyring test_auth_profiles: 374: ceph -n client.xx-profile-ro -k client.xx.keyring status *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cluster e12b5778-21f1-4b32-9b0e-f5f1d48eeafe health HEALTH_WARN too few pgs per osd (8 < min 10); mon.a low disk space monmap e1: 1 mons at {a=127.0.0.1:6789/0}, election epoch 2, quorum 0 a mdsmap e6: 1/1/1 up {0=a=up:active} osdmap e99: 3 osds: 3 up, 3 in pgmap v128: 24 pgs, 3 pools, 1902 bytes data, 20 objects 441 GB used, 113 GB / 555 GB avail 24 active+clean test_auth_profiles: 375: ceph -n client.xx-profile-ro -k 
client.xx.keyring osd dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** epoch 99 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe created 2014-10-08 11:19:00.929877 modified 2014-10-08 11:19:57.743037 flags pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1 flags hashpspool stripe_width 0 pool 1 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 flags hashpspool crash_replay_interval 45 stripe_width 0 pool 2 'cephfs_metadata' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 18 flags hashpspool stripe_width 0 max_osd 3 osd.0 up in weight 1 up_from 4 up_thru 84 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/30001 127.0.0.1:6801/30001 127.0.0.1:6802/30001 127.0.0.1:6803/30001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 osd.1 up in weight 1 up_from 8 up_thru 90 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/30226 127.0.0.1:6805/30226 127.0.0.1:6806/30226 127.0.0.1:6807/30226 exists,up d1c362f0-26d3-43c3-842a-16ba0f1ccfbf osd.2 up in weight 1 up_from 13 up_thru 90 down_at 0 last_clean_interval [0,0) 127.0.0.1:6808/30476 127.0.0.1:6809/30476 127.0.0.1:6810/30476 127.0.0.1:6811/30476 exists,up 0c1200ff-b9ab-4829-b942-4112257d80f1 test_auth_profiles: 376: ceph -n client.xx-profile-ro -k client.xx.keyring pg dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped all in format plain version 128 stamp 2014-10-08 11:20:02.631170 last_osdmap_epoch 99 last_pg_scan 99 full_ratio 0.95 nearfull_ratio 0.85 pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp 0.7 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.731907 0'0 99:88 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:03.609236 0'0 2014-10-08 11:19:03.609236 1.6 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.261836 0'0 99:75 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.165949 0'0 2014-10-08 11:19:05.165949 2.5 3 0 0 0 0 46 3 3 active+clean 2014-10-08 11:19:05.563892 20'3 99:77 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.536929 0'0 2014-10-08 11:19:05.536929 0.6 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.737779 0'0 99:99 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:02.364156 0'0 2014-10-08 11:19:02.364156 1.7 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.251958 0'0 99:75 [1,2,0] 1 [1,2,0] 1 0'0 2014-10-08 11:19:05.166647 0'0 2014-10-08 11:19:05.166647 2.4 5 0 0 0 0 1326 5 5 active+clean 2014-10-08 11:19:05.561902 20'5 99:82 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.536338 0'0 2014-10-08 11:19:05.536338 2.7 4 0 0 0 0 474 4 4 active+clean 2014-10-08 11:19:05.565582 20'4 99:80 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.538174 0'0 2014-10-08 11:19:05.538174 0.5 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.731156 0'0 99:99 [0,2,1] 0 [0,2,1] 0 0'0 2014-10-08 11:19:02.363085 0'0 2014-10-08 11:19:02.363085 1.4 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.250182 0'0 99:75 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.165309 0'0 2014-10-08 11:19:05.165309 0.4 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.729629 0'0 99:99 [0,2,1] 0 [0,2,1] 0 0'0 2014-10-08 11:19:02.361996 0'0 2014-10-08 11:19:02.361996 1.5 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.187409 0'0 99:72 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:05.165941 0'0 2014-10-08 11:19:05.165941 2.6 1 0 0 0 0 0 1 1 active+clean 2014-10-08 11:19:05.557508 
20'1 99:74 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.537530 0'0 2014-10-08 11:19:05.537530 1.2 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.189936 0'0 99:74 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:05.162556 0'0 2014-10-08 11:19:05.162556 0.3 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.739303 0'0 99:74 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:02.360847 0'0 2014-10-08 11:19:02.360847 2.1 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.566046 0'0 99:70 [2,1,0] 2 [2,1,0] 2 0'0 2014-10-08 11:19:05.539239 0'0 2014-10-08 11:19:05.539239 0.2 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.736402 0'0 99:99 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:02.359778 0'0 2014-10-08 11:19:02.359778 1.3 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.244540 0'0 99:75 [1,2,0] 1 [1,2,0] 1 0'0 2014-10-08 11:19:05.164663 0'0 2014-10-08 11:19:05.164663 2.0 4 0 0 0 0 0 4 4 active+clean 2014-10-08 11:19:05.559710 20'4 99:74 [2,1,0] 2 [2,1,0] 2 0'0 2014-10-08 11:19:05.538615 0'0 2014-10-08 11:19:05.538615 1.0 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.188521 0'0 99:75 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.163941 0'0 2014-10-08 11:19:05.163941 0.1 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.737703 0'0 99:74 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:03.608996 0'0 2014-10-08 11:19:03.608996 2.3 3 0 0 0 0 56 3 3 active+clean 2014-10-08 11:19:05.557082 20'3 99:79 [1,2,0] 1 [1,2,0] 1 0'0 2014-10-08 11:19:05.535704 0'0 2014-10-08 11:19:05.535704 1.1 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.186442 0'0 99:72 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:05.165238 0'0 2014-10-08 11:19:05.165238 0.0 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.725400 0'0 99:99 [0,2,1] 0 [0,2,1] 0 0'0 2014-10-08 11:19:02.356911 0'0 2014-10-08 11:19:02.356911 2.2 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.559271 0'0 99:72 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:05.539730 0'0 2014-10-08 11:19:05.539730 pool 0 0 0 0 0 0 0 0 pool 1 0 0 0 0 0 0 0 pool 2 20 0 0 0 1902 20 20 sum 20 0 0 0 1902 20 20 osdstat kbused kbavail kb hb in hb out 0 154399648 39819684 194219332 [1,2] [] 1 154399616 39819716 194219332 [0,2] [] 2 154399808 39819524 194219332 [0,1] [] sum 463199072 119458924 582657996 test_auth_profiles: 377: ceph -n client.xx-profile-ro -k client.xx.keyring mon dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped monmap epoch 1 epoch 1 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe last_changed 2014-10-08 11:19:00.829984 created 2014-10-08 11:19:00.829984 0: 127.0.0.1:6789/0 mon.a test_auth_profiles: 378: ceph -n client.xx-profile-ro -k client.xx.keyring mds dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 6 epoch 6 flags 0 created 2014-10-08 11:19:05.911699 modified 2014-10-08 11:19:06.852979 tableserver 0 root 0 session_timeout 60 session_autoclose 300 max_file_size 1099511627776 last_failure 0 last_failure_osd_epoch 0 compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table} max_mds 1 in 0 up {0=4113} failed stopped data_pools 1 metadata_pool 2 inline_data disabled 4113: 127.0.0.1:6812/30768 'a' mds.0.1 up:active seq 2 test_auth_profiles: 380: ceph -n client.xx-profile-ro -k client.xx.keyring log foo test_auth_profiles: 380: true test_auth_profiles: 381: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: 
retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 382: ceph -n client.xx-profile-ro -k client.xx.keyring osd set noout test_auth_profiles: 382: true test_auth_profiles: 383: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 384: ceph -n client.xx-profile-ro -k client.xx.keyring auth list test_auth_profiles: 384: true test_auth_profiles: 385: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 388: ceph -n client.xx-profile-rw -k client.xx.keyring status *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cluster e12b5778-21f1-4b32-9b0e-f5f1d48eeafe health HEALTH_WARN too few pgs per osd (8 < min 10); mon.a low disk space monmap e1: 1 mons at {a=127.0.0.1:6789/0}, election epoch 2, quorum 0 a mdsmap e6: 1/1/1 up {0=a=up:active} osdmap e99: 3 osds: 3 up, 3 in pgmap v128: 24 pgs, 3 pools, 1902 bytes data, 20 objects 441 GB used, 113 GB / 555 GB avail 24 active+clean test_auth_profiles: 389: ceph -n client.xx-profile-rw -k client.xx.keyring osd dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** epoch 99 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe created 2014-10-08 11:19:00.929877 modified 2014-10-08 11:19:57.743037 flags pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1 flags hashpspool stripe_width 0 pool 1 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 flags hashpspool crash_replay_interval 45 stripe_width 0 pool 2 'cephfs_metadata' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 18 flags hashpspool stripe_width 0 max_osd 3 osd.0 up in weight 1 up_from 4 up_thru 84 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/30001 127.0.0.1:6801/30001 127.0.0.1:6802/30001 127.0.0.1:6803/30001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 osd.1 up in weight 1 up_from 8 up_thru 90 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/30226 127.0.0.1:6805/30226 127.0.0.1:6806/30226 127.0.0.1:6807/30226 exists,up d1c362f0-26d3-43c3-842a-16ba0f1ccfbf osd.2 up in weight 1 up_from 13 up_thru 90 down_at 0 last_clean_interval [0,0) 127.0.0.1:6808/30476 127.0.0.1:6809/30476 127.0.0.1:6810/30476 127.0.0.1:6811/30476 exists,up 0c1200ff-b9ab-4829-b942-4112257d80f1 test_auth_profiles: 390: ceph -n client.xx-profile-rw -k client.xx.keyring pg dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped all in format plain version 128 stamp 2014-10-08 11:20:02.631170 last_osdmap_epoch 99 last_pg_scan 99 full_ratio 0.95 nearfull_ratio 0.85 pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp 0.7 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.731907 0'0 99:88 [1,0,2] 1 [1,0,2] 1 0'0 
2014-10-08 11:19:03.609236 0'0 2014-10-08 11:19:03.609236 1.6 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.261836 0'0 99:75 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.165949 0'0 2014-10-08 11:19:05.165949 2.5 3 0 0 0 0 46 3 3 active+clean 2014-10-08 11:19:05.563892 20'3 99:77 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.536929 0'0 2014-10-08 11:19:05.536929 0.6 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.737779 0'0 99:99 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:02.364156 0'0 2014-10-08 11:19:02.364156 1.7 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.251958 0'0 99:75 [1,2,0] 1 [1,2,0] 1 0'0 2014-10-08 11:19:05.166647 0'0 2014-10-08 11:19:05.166647 2.4 5 0 0 0 0 1326 5 5 active+clean 2014-10-08 11:19:05.561902 20'5 99:82 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.536338 0'0 2014-10-08 11:19:05.536338 2.7 4 0 0 0 0 474 4 4 active+clean 2014-10-08 11:19:05.565582 20'4 99:80 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.538174 0'0 2014-10-08 11:19:05.538174 0.5 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.731156 0'0 99:99 [0,2,1] 0 [0,2,1] 0 0'0 2014-10-08 11:19:02.363085 0'0 2014-10-08 11:19:02.363085 1.4 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.250182 0'0 99:75 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.165309 0'0 2014-10-08 11:19:05.165309 0.4 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.729629 0'0 99:99 [0,2,1] 0 [0,2,1] 0 0'0 2014-10-08 11:19:02.361996 0'0 2014-10-08 11:19:02.361996 1.5 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.187409 0'0 99:72 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:05.165941 0'0 2014-10-08 11:19:05.165941 2.6 1 0 0 0 0 0 1 1 active+clean 2014-10-08 11:19:05.557508 20'1 99:74 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.537530 0'0 2014-10-08 11:19:05.537530 1.2 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.189936 0'0 99:74 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:05.162556 0'0 2014-10-08 11:19:05.162556 0.3 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.739303 0'0 99:74 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:02.360847 0'0 2014-10-08 11:19:02.360847 2.1 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.566046 0'0 99:70 [2,1,0] 2 [2,1,0] 2 0'0 2014-10-08 11:19:05.539239 0'0 2014-10-08 11:19:05.539239 0.2 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.736402 0'0 99:99 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:02.359778 0'0 2014-10-08 11:19:02.359778 1.3 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.244540 0'0 99:75 [1,2,0] 1 [1,2,0] 1 0'0 2014-10-08 11:19:05.164663 0'0 2014-10-08 11:19:05.164663 2.0 4 0 0 0 0 0 4 4 active+clean 2014-10-08 11:19:05.559710 20'4 99:74 [2,1,0] 2 [2,1,0] 2 0'0 2014-10-08 11:19:05.538615 0'0 2014-10-08 11:19:05.538615 1.0 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.188521 0'0 99:75 [1,0,2] 1 [1,0,2] 1 0'0 2014-10-08 11:19:05.163941 0'0 2014-10-08 11:19:05.163941 0.1 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.737703 0'0 99:74 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:03.608996 0'0 2014-10-08 11:19:03.608996 2.3 3 0 0 0 0 56 3 3 active+clean 2014-10-08 11:19:05.557082 20'3 99:79 [1,2,0] 1 [1,2,0] 1 0'0 2014-10-08 11:19:05.535704 0'0 2014-10-08 11:19:05.535704 1.1 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.186442 0'0 99:72 [2,0,1] 2 [2,0,1] 2 0'0 2014-10-08 11:19:05.165238 0'0 2014-10-08 11:19:05.165238 0.0 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:04.725400 0'0 99:99 [0,2,1] 0 [0,2,1] 0 0'0 2014-10-08 11:19:02.356911 0'0 2014-10-08 11:19:02.356911 2.2 0 0 0 0 0 0 0 0 active+clean 2014-10-08 11:19:05.559271 0'0 99:72 [0,1,2] 0 [0,1,2] 0 0'0 2014-10-08 11:19:05.539730 0'0 2014-10-08 
11:19:05.539730 pool 0 0 0 0 0 0 0 0 pool 1 0 0 0 0 0 0 0 pool 2 20 0 0 0 1902 20 20 sum 20 0 0 0 1902 20 20 osdstat kbused kbavail kb hb in hb out 0 154399648 39819684 194219332 [1,2] [] 1 154399616 39819716 194219332 [0,2] [] 2 154399808 39819524 194219332 [0,1] [] sum 463199072 119458924 582657996 test_auth_profiles: 391: ceph -n client.xx-profile-rw -k client.xx.keyring mon dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped monmap epoch 1 epoch 1 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe last_changed 2014-10-08 11:19:00.829984 created 2014-10-08 11:19:00.829984 0: 127.0.0.1:6789/0 mon.a test_auth_profiles: 392: ceph -n client.xx-profile-rw -k client.xx.keyring mds dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 6 epoch 6 flags 0 created 2014-10-08 11:19:05.911699 modified 2014-10-08 11:19:06.852979 tableserver 0 root 0 session_timeout 60 session_autoclose 300 max_file_size 1099511627776 last_failure 0 last_failure_osd_epoch 0 compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table} max_mds 1 in 0 up {0=4113} failed stopped data_pools 1 metadata_pool 2 inline_data disabled 4113: 127.0.0.1:6812/30768 'a' mds.0.1 up:active seq 2 test_auth_profiles: 393: ceph -n client.xx-profile-rw -k client.xx.keyring log foo *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_auth_profiles: 394: ceph -n client.xx-profile-rw -k client.xx.keyring osd set noout *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set noout test_auth_profiles: 395: ceph -n client.xx-profile-rw -k client.xx.keyring osd unset noout *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset noout test_auth_profiles: 397: ceph -n client.xx-profile-rw -k client.xx.keyring auth list test_auth_profiles: 397: true test_auth_profiles: 398: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 401: ceph -n client.xx-profile-rd -k client.xx.keyring auth list *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** installed auth entries: mds.a key: AQCoHTVUQFZSFxAAZ1YmiXXgiWqIBEMWVPPHyg== caps: [mds] allow caps: [mon] allow profile mds caps: [osd] allow * osd.0 key: AQClHTVUALGoKBAAssc5Qa1/+n7VF0nwFOB/mg== caps: [mon] allow profile osd caps: [osd] allow * osd.1 key: AQCmHTVUQJyeORAA14FpJg9JvSGVH9NZXKC72A== caps: [mon] allow profile osd caps: [osd] allow * osd.2 key: AQCoHTVUmF33AxAA1/Z+VH/uwxanq2pSt5QdaA== caps: [mon] allow profile osd caps: [osd] allow * client.admin key: AQCkHTVU0L53MBAAmmjtS7Zqy0FfENMr9fYTXg== caps: [mds] allow * caps: [mon] allow * caps: [osd] allow * client.xx-profile-rd key: AQDkHTVUEHX9JxAAWAjXTSTnYRsRUCfPLKRVAQ== caps: [mon] allow profile role-definer client.xx-profile-ro key: AQDjHTVU4BtdMxAApqbYv2JKo8ZBd0h7DFTz2w== caps: [mon] allow profile read-only client.xx-profile-rw key: AQDkHTVUAHCxEBAAHm/zsk+hJD4KuSZ+gPmAfw== caps: [mon] allow profile read-write test_auth_profiles: 402: ceph -n client.xx-profile-rd -k client.xx.keyring auth export *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** exported master keyring [mds.a] key = 
AQCoHTVUQFZSFxAAZ1YmiXXgiWqIBEMWVPPHyg== caps mds = "allow" caps mon = "allow profile mds" caps osd = "allow *" [osd.0] key = AQClHTVUALGoKBAAssc5Qa1/+n7VF0nwFOB/mg== caps mon = "allow profile osd" caps osd = "allow *" [osd.1] key = AQCmHTVUQJyeORAA14FpJg9JvSGVH9NZXKC72A== caps mon = "allow profile osd" caps osd = "allow *" [osd.2] key = AQCoHTVUmF33AxAA1/Z+VH/uwxanq2pSt5QdaA== caps mon = "allow profile osd" caps osd = "allow *" [client.admin] key = AQCkHTVU0L53MBAAmmjtS7Zqy0FfENMr9fYTXg== auid = 0 caps mds = "allow *" caps mon = "allow *" caps osd = "allow *" [client.xx-profile-rd] key = AQDkHTVUEHX9JxAAWAjXTSTnYRsRUCfPLKRVAQ== caps mon = "allow profile role-definer" [client.xx-profile-ro] key = AQDjHTVU4BtdMxAApqbYv2JKo8ZBd0h7DFTz2w== caps mon = "allow profile read-only" [client.xx-profile-rw] key = AQDkHTVUAHCxEBAAHm/zsk+hJD4KuSZ+gPmAfw== caps mon = "allow profile read-write" test_auth_profiles: 403: ceph -n client.xx-profile-rd -k client.xx.keyring auth add client.xx-profile-foo *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added key for client.xx-profile-foo test_auth_profiles: 404: ceph -n client.xx-profile-rd -k client.xx.keyring status *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cluster e12b5778-21f1-4b32-9b0e-f5f1d48eeafe health HEALTH_WARN too few pgs per osd (8 < min 10); mon.a low disk space monmap e1: 1 mons at {a=127.0.0.1:6789/0}, election epoch 2, quorum 0 a mdsmap e6: 1/1/1 up {0=a=up:active} osdmap e101: 3 osds: 3 up, 3 in pgmap v131: 24 pgs, 3 pools, 1902 bytes data, 20 objects 441 GB used, 113 GB / 555 GB avail 24 active+clean test_auth_profiles: 405: ceph -n client.xx-profile-rd -k client.xx.keyring osd dump test_auth_profiles: 405: true test_auth_profiles: 406: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 407: ceph -n client.xx-profile-rd -k client.xx.keyring pg dump test_auth_profiles: 407: true test_auth_profiles: 408: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 410: ceph -n client.xx-profile-rd -k client.xx.keyring mon dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped monmap epoch 1 epoch 1 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe last_changed 2014-10-08 11:19:00.829984 created 2014-10-08 11:19:00.829984 0: 127.0.0.1:6789/0 mon.a test_auth_profiles: 412: ceph -n client.xx-profile-rd -k client.xx.keyring mon add foo 1.1.1.1 test_auth_profiles: 412: true test_auth_profiles: 413: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 414: ceph -n client.xx-profile-rd -k client.xx.keyring mds dump test_auth_profiles: 414: true test_auth_profiles: 415: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: 
retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 416: ceph -n client.xx-profile-rd -k client.xx.keyring log foo test_auth_profiles: 416: true test_auth_profiles: 417: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 418: ceph -n client.xx-profile-rd -k client.xx.keyring osd set noout test_auth_profiles: 418: true test_auth_profiles: 419: check_response 'EACCES: access denied' check_response: 108: expected_stderr_string='EACCES: access denied' check_response: 109: retcode= check_response: 110: expected_retcode= check_response: 111: '[' '' -a '!=' ']' check_response: 116: grep 'EACCES: access denied' /tmp/cephtool30851/test_invalid.30851 test_auth_profiles: 421: ceph -n client.xx-profile-rd -k client.xx.keyring auth del client.xx-profile-ro *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** updated test_auth_profiles: 422: ceph -n client.xx-profile-rd -k client.xx.keyring auth del client.xx-profile-rw *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** updated test_auth_profiles: 423: ceph -n client.xx-profile-rd -k client.xx.keyring auth del client.xx-profile-rd *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** updated test_auth_profiles: 424: rm -f client.xx.keyring : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_mon_misc test_mon_misc: 430: ceph osd dump test_mon_misc: 430: grep '^epoch' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** epoch 101 test_mon_misc: 431: ceph --concise osd dump test_mon_misc: 431: grep '^epoch' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** epoch 101 test_mon_misc: 434: ceph df *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_misc: 435: grep GLOBAL /tmp/cephtool30851/test_invalid.30851 GLOBAL: test_mon_misc: 436: grep -v DIRTY /tmp/cephtool30851/test_invalid.30851 GLOBAL: SIZE AVAIL RAW USED %RAW USED 555G 113G 441G 79.50 POOLS: NAME ID USED %USED MAX AVAIL OBJECTS rbd 0 0 0 38885M 0 cephfs_data 1 0 0 38885M 0 cephfs_metadata 2 1902 0 38885M 20 test_mon_misc: 437: ceph df detail *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_misc: 438: grep CATEGORY /tmp/cephtool30851/test_invalid.30851 NAME ID CATEGORY USED %USED MAX AVAIL OBJECTS DIRTY READ WRITE test_mon_misc: 439: grep DIRTY /tmp/cephtool30851/test_invalid.30851 NAME ID CATEGORY USED %USED MAX AVAIL OBJECTS DIRTY READ WRITE test_mon_misc: 440: ceph df --format json *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_misc: 441: grep total_bytes /tmp/cephtool30851/test_invalid.30851 {"stats":{"total_bytes":596641787904,"total_used_bytes":474318450688,"total_avail_bytes":122323337216},"pools":[{"name":"rbd","id":0,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40774175552,"objects":0}},{"name":"cephfs_data","id":1,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40774175552,"objects":0}},{"name":"cephfs_metadata","id":2,"stats":{"kb_used":2,"bytes_used":1902,"max_avail":40774175552,"objects":20}}]} test_mon_misc: 442: grep -v dirty /tmp/cephtool30851/test_invalid.30851 
{"stats":{"total_bytes":596641787904,"total_used_bytes":474318450688,"total_avail_bytes":122323337216},"pools":[{"name":"rbd","id":0,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40774175552,"objects":0}},{"name":"cephfs_data","id":1,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40774175552,"objects":0}},{"name":"cephfs_metadata","id":2,"stats":{"kb_used":2,"bytes_used":1902,"max_avail":40774175552,"objects":20}}]} test_mon_misc: 443: ceph df detail --format json *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_misc: 444: grep rd_bytes /tmp/cephtool30851/test_invalid.30851 {"stats":{"total_bytes":596641787904,"total_used_bytes":474320859136,"total_avail_bytes":122320928768,"total_objects":20},"pools":[{"name":"rbd","id":0,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40772934464,"objects":0,"dirty":0,"rd":0,"rd_bytes":0,"wr":0,"wr_bytes":0},"categories":[]},{"name":"cephfs_data","id":1,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40772934464,"objects":0,"dirty":0,"rd":0,"rd_bytes":0,"wr":0,"wr_bytes":0},"categories":[]},{"name":"cephfs_metadata","id":2,"stats":{"kb_used":2,"bytes_used":1902,"max_avail":40772934464,"objects":20,"dirty":20,"rd":0,"rd_bytes":0,"wr":21,"wr_bytes":8192},"categories":[]}]} test_mon_misc: 445: grep dirty /tmp/cephtool30851/test_invalid.30851 {"stats":{"total_bytes":596641787904,"total_used_bytes":474320859136,"total_avail_bytes":122320928768,"total_objects":20},"pools":[{"name":"rbd","id":0,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40772934464,"objects":0,"dirty":0,"rd":0,"rd_bytes":0,"wr":0,"wr_bytes":0},"categories":[]},{"name":"cephfs_data","id":1,"stats":{"kb_used":0,"bytes_used":0,"max_avail":40772934464,"objects":0,"dirty":0,"rd":0,"rd_bytes":0,"wr":0,"wr_bytes":0},"categories":[]},{"name":"cephfs_metadata","id":2,"stats":{"kb_used":2,"bytes_used":1902,"max_avail":40772934464,"objects":20,"dirty":20,"rd":0,"rd_bytes":0,"wr":21,"wr_bytes":8192},"categories":[]}]} test_mon_misc: 446: ceph df --format xml test_mon_misc: 446: grep '' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 596641787904474320859136122320928768rbd000407729344640cephfs_data100407729344640cephfs_metadata2219024077293446420 test_mon_misc: 447: ceph df detail --format xml test_mon_misc: 447: grep '' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 59664178790447432085913612232092876820rbd00040772934464000000cephfs_data10040772934464000000cephfs_metadata22190240772934464202000218192 test_mon_misc: 449: ceph fsid *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** e12b5778-21f1-4b32-9b0e-f5f1d48eeafe test_mon_misc: 450: ceph health *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** HEALTH_WARN too few pgs per osd (8 < min 10); mon.a low disk space test_mon_misc: 451: ceph health detail *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** HEALTH_WARN too few pgs per osd (8 < min 10); mon.a low disk space too few pgs per osd (8 < min 10) mon.a low disk space -- 20% avail test_mon_misc: 452: ceph health --format json-pretty *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "health": { "health_services": [ { "mons": [ { "name": "a", "kb_total": 194219332, "kb_used": 154399648, "kb_avail": 39819684, "avail_percent": 20, "last_updated": "2014-10-08 11:20:00.922794", "store_stats": { "bytes_total": 9438454, "bytes_sst": 888, "bytes_log": 9371648, "bytes_misc": 65918, "last_updated": "0.000000"}, "health": "HEALTH_WARN", "health_detail": "low disk 
space"}]}]}, "summary": [ { "severity": "HEALTH_WARN", "summary": "too few pgs per osd (8 < min 10)"}, { "severity": "HEALTH_WARN", "summary": "mon.a low disk space"}], "timechecks": { "epoch": 2, "round": 0, "round_status": "finished"}, "overall_status": "HEALTH_WARN", "detail": []} test_mon_misc: 453: ceph health detail --format xml-pretty *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** a 194219332 154399648 39819684 20 2014-10-08 11:20:00.922794 9438454 888 9371648 65918 0.000000 HEALTH_WARN low disk space HEALTH_WARN too few pgs per osd (8 < min 10) HEALTH_WARN mon.a low disk space 2 0 finished HEALTH_WARN too few pgs per osd (8 < min 10) mon.a low disk space -- 20% avail test_mon_misc: 456: wpid=6604 test_mon_misc: 455: ceph -w test_mon_misc: 457: date test_mon_misc: 457: mymsg='this is a test log message 30851.Wed Oct 8 11:20:19 UTC 2014' test_mon_misc: 458: ceph log 'this is a test log message 30851.Wed Oct 8 11:20:19 UTC 2014' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_misc: 459: sleep 3 test_mon_misc: 460: grep 'this is a test log message 30851.Wed Oct 8 11:20:19 UTC 2014' /tmp/cephtool30851/30851 2014-10-08 11:20:20.312828 client.4304 [INF] this is a test log message 30851.Wed Oct 8 11:20:19 UTC 2014 2014-10-08 11:20:20.313329 mon.0 [INF] from='client.? 127.0.0.1:0/1006606' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["this is a test log message 30851.Wed Oct 8 11:20:19 UTC 2014"]}]: dispatch 2014-10-08 11:20:20.367625 mon.0 [INF] from='client.? 127.0.0.1:0/1006606' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["this is a test log message 30851.Wed Oct 8 11:20:19 UTC 2014"]}]': finished test_mon_misc: 465: kill 6604 : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** ../qa/workunits/cephtool/test.sh: line 9: 6604 Terminated ceph -w > $TMPDIR/$$ : 1340: test_mon_mds test_mon_mds: 571: remove_all_fs remove_all_fs: 509: ceph fs ls --format=json remove_all_fs: 509: python -c 'import json; import sys; print '\'' '\''.join([fs['\''name'\''] for fs in json.load(sys.stdin)])' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** remove_all_fs: 509: existing_fs=cephfs remove_all_fs: 510: '[' -n cephfs ']' remove_all_fs: 511: fail_all_mds fail_all_mds: 494: ceph mds cluster_down *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked mdsmap DOWN fail_all_mds: 495: get_mds_gids get_mds_gids: 489: ceph mds dump --format=json get_mds_gids: 489: python -c 'import json; import sys; print '\'' '\''.join([m['\''gid'\''].__str__() for m in json.load(sys.stdin)['\''info'\''].values()])' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 7 fail_all_mds: 495: mds_gids=4113 fail_all_mds: 496: for mds_gid in '$mds_gids' fail_all_mds: 497: ceph mds fail 4113 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** failed mds gid 4113 fail_all_mds: 499: check_mds_active check_mds_active: 471: ceph mds dump check_mds_active: 471: grep active *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 8 remove_all_fs: 512: echo 'Removing existing filesystem '\''cephfs'\''...' Removing existing filesystem 'cephfs'... remove_all_fs: 513: ceph fs rm cephfs --yes-i-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** remove_all_fs: 514: echo 'Removed '\''cephfs'\''.' Removed 'cephfs'. 
test_mon_mds: 573: ceph osd pool create fs_data 10
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'fs_data' created
test_mon_mds: 574: ceph osd pool create fs_metadata 10
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'fs_metadata' created
test_mon_mds: 575: ceph fs new cephfs fs_metadata fs_data
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
new fs with metadata pool 17 and data pool 16
test_mon_mds: 577: ceph mds cluster_down
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
marked mdsmap DOWN
test_mon_mds: 578: ceph mds cluster_up
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
unmarked mdsmap DOWN
test_mon_mds: 580: ceph mds compat rm_incompat 4
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
removing incompat feature 4
test_mon_mds: 581: ceph mds compat rm_incompat 4
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
incompat feature 4 not present in compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
test_mon_mds: 585: fail_all_mds
fail_all_mds: 494: ceph mds cluster_down
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
marked mdsmap DOWN
fail_all_mds: 495: get_mds_gids
get_mds_gids: 489: ceph mds dump --format=json
get_mds_gids: 489: python -c 'import json; import sys; print '\'' '\''.join([m['\''gid'\''].__str__() for m in json.load(sys.stdin)['\''info'\''].values()])'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
dumped mdsmap epoch 15
fail_all_mds: 495: mds_gids=
fail_all_mds: 499: check_mds_active
check_mds_active: 471: ceph mds dump
check_mds_active: 471: grep active
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
dumped mdsmap epoch 15
test_mon_mds: 588: ceph osd dump
test_mon_mds: 588: grep fs_data
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
test_mon_mds: 589: check_response 'crash_replay_interval 45 '
check_response: 108: expected_stderr_string='crash_replay_interval 45 '
check_response: 109: retcode=
check_response: 110: expected_retcode=
check_response: 111: '[' '' -a '!=' ']'
check_response: 116: grep 'crash_replay_interval 45 ' /tmp/cephtool30851/test_invalid.30851
test_mon_mds: 591: ceph mds compat show
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
test_mon_mds: 592: expect_false ceph mds deactivate 2
expect_false: 45: set -x
expect_false: 46: ceph mds deactivate 2
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EEXIST: mds.2 not active (down:dne)
expect_false: 46: return 0
test_mon_mds: 593: ceph mds dump
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
dumped mdsmap epoch 15
epoch 15
flags 1
created 2014-10-08 11:20:26.899556
modified 2014-10-08 11:20:28.934006
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
max_mds 1
in
up {}
failed
stopped
data_pools 16
metadata_pool 17
inline_data disabled
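The expect_false wrapper that brackets the negative tests here (test.sh lines 45-46 in the trace, as with "expect_false ceph mds deactivate 2" above) simply inverts a command's exit status. A sketch consistent with the trace; the body is reconstructed from the set -x output, not copied from the source file:

function expect_false()
{
	set -x
	# Run the arguments as a command; failure of the command is the
	# expected outcome, so invert the exit status.
	if "$@" ; then return 1 ; else return 0 ; fi
}

Because "$@" is executed directly rather than through a subshell, a call such as expect_false 'ceph osd blacklist ls | grep 192.168.0.1' (seen later in test_mon_osd) makes bash look for a single command whose name is the entire quoted pipeline; the "No such file or directory" / "command not found" noise in that part of the trace is the quoted string failing to resolve, which still registers as the expected failure.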
test_mon_mds: 595: mdsmapfile=/tmp/cephtool30851/mdsmap.30851 test_mon_mds: 596: ceph mds getmap -o /tmp/cephtool30851/mdsmap.30851 --no-log-to-stderr test_mon_mds: 596: grep epoch test_mon_mds: 596: sed 's/.*epoch //' test_mon_mds: 596: current_epoch=15 test_mon_mds: 597: '[' -s /tmp/cephtool30851/mdsmap.30851 ']' test_mon_mds: 598: (( epoch = current_epoch + 1 )) test_mon_mds: 599: ceph mds setmap -i /tmp/cephtool30851/mdsmap.30851 16 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set mds map test_mon_mds: 600: rm /tmp/cephtool30851/mdsmap.30851 test_mon_mds: 602: ceph osd pool create data2 10 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'data2' created test_mon_mds: 603: ceph osd pool create data3 10 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'data3' created test_mon_mds: 604: ceph osd dump test_mon_mds: 604: grep 'pool.*data2' test_mon_mds: 604: awk '{print $2;}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_mds: 604: data2_pool=18 test_mon_mds: 605: ceph osd dump test_mon_mds: 605: grep 'pool.*data3' test_mon_mds: 605: awk '{print $2;}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_mds: 605: data3_pool=19 test_mon_mds: 606: ceph mds add_data_pool 18 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added data pool 18 to mdsmap test_mon_mds: 607: ceph mds add_data_pool 19 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** added data pool 19 to mdsmap test_mon_mds: 608: ceph mds remove_data_pool 18 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** removed data pool 18 from mdsmap test_mon_mds: 609: ceph mds remove_data_pool 19 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** removed data pool 19 from mdsmap test_mon_mds: 610: ceph osd pool delete data2 data2 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'data2' removed test_mon_mds: 611: ceph osd pool delete data3 data3 --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'data3' removed test_mon_mds: 612: ceph mds set_max_mds 4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** max_mds = 4 test_mon_mds: 613: ceph mds set_max_mds 3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** max_mds = 3 test_mon_mds: 614: ceph mds set max_mds 4 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_mds: 615: expect_false ceph mds set max_mds asdf expect_false: 45: set -x expect_false: 46: ceph mds set max_mds asdf *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EINVAL: expect_false: 46: return 0 test_mon_mds: 616: expect_false ceph mds set inline_data true expect_false: 45: set -x expect_false: 46: ceph mds set inline_data true *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error EPERM: inline data is new and experimental; you must specify --yes-i-really-mean-it expect_false: 46: return 0 test_mon_mds: 617: ceph mds set inline_data true --yes-i-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** inline data enabled test_mon_mds: 618: ceph mds set inline_data yes --yes-i-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** inline data enabled test_mon_mds: 619: ceph mds set inline_data 1 --yes-i-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH 
and LD_LIBRARY_PATH ***
inline data enabled
test_mon_mds: 620: expect_false ceph mds set inline_data --yes-i-really-mean-it
expect_false: 45: set -x
expect_false: 46: ceph mds set inline_data --yes-i-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EINVAL: value must be false|no|0 or true|yes|1
expect_false: 46: return 0
test_mon_mds: 621: ceph mds set inline_data false
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
inline data disabled
test_mon_mds: 622: ceph mds set inline_data no
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
inline data disabled
test_mon_mds: 623: ceph mds set inline_data 0
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
inline data disabled
test_mon_mds: 624: expect_false ceph mds set inline_data asdf
expect_false: 45: set -x
expect_false: 46: ceph mds set inline_data asdf
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EINVAL: value must be false|no|0 or true|yes|1
expect_false: 46: return 0
test_mon_mds: 625: ceph mds set max_file_size 1048576
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
test_mon_mds: 626: expect_false ceph mds set max_file_size 123asdf
expect_false: 45: set -x
expect_false: 46: ceph mds set max_file_size 123asdf
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EINVAL: max_file_size requires an integer value
expect_false: 46: return 0
test_mon_mds: 628: expect_false ceph mds set allow_new_snaps
expect_false: 45: set -x
expect_false: 46: ceph mds set allow_new_snaps
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Invalid command: missing required parameter val()
mds set max_mds|max_file_size|allow_new_snaps|inline_data <val> {<confirm>} : set mds parameter <var> to <val>
Error EINVAL: invalid command
expect_false: 46: return 0
test_mon_mds: 629: expect_false ceph mds set allow_new_snaps true
expect_false: 45: set -x
expect_false: 46: ceph mds set allow_new_snaps true
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EPERM: Snapshots are unstable and will probably break your FS!
Set to --yes-i-really-mean-it if you are sure you want to enable them
expect_false: 46: return 0
test_mon_mds: 630: ceph mds set allow_new_snaps true --yes-i-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
enabled new snapshots
test_mon_mds: 631: ceph mds set allow_new_snaps 0
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
disabled new snapshots
test_mon_mds: 632: ceph mds set allow_new_snaps false
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
disabled new snapshots
test_mon_mds: 633: ceph mds set allow_new_snaps no
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
disabled new snapshots
test_mon_mds: 634: expect_false ceph mds set allow_new_snaps taco
expect_false: 45: set -x
expect_false: 46: ceph mds set allow_new_snaps taco
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EINVAL: value must be true|yes|1 or false|no|0
expect_false: 46: return 0
test_mon_mds: 638: ceph osd pool create mds-ec-pool 10 10 erasure
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'mds-ec-pool' created
test_mon_mds: 639: set +e
test_mon_mds: 640: ceph mds add_data_pool mds-ec-pool
test_mon_mds: 641: check_response erasure-code 22 22
check_response: 108: expected_stderr_string=erasure-code
check_response: 109: retcode=22
check_response: 110: expected_retcode=22
check_response: 111: '[' 22 -a 22 '!=' 22 ']'
check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851
test_mon_mds: 642: set -e
test_mon_mds: 643: ceph osd dump
test_mon_mds: 643: grep 'pool.* '\''mds-ec-pool'
test_mon_mds: 643: awk '{print $2;}'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
test_mon_mds: 643: ec_poolnum=20
test_mon_mds: 644: ceph osd dump
test_mon_mds: 644: grep 'pool.* '\''fs_data'
test_mon_mds: 644: awk '{print $2;}'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
test_mon_mds: 644: data_poolnum=16
test_mon_mds: 645: ceph osd dump
test_mon_mds: 645: grep 'pool.* '\''fs_metadata'
test_mon_mds: 645: awk '{print $2;}'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
test_mon_mds: 645: metadata_poolnum=17
test_mon_mds: 647: fail_all_mds
fail_all_mds: 494: ceph mds cluster_down
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
mdsmap already marked DOWN
fail_all_mds: 495: get_mds_gids
get_mds_gids: 489: ceph mds dump --format=json
get_mds_gids: 489: python -c 'import json; import sys; print '\'' '\''.join([m['\''gid'\''].__str__() for m in json.load(sys.stdin)['\''info'\''].values()])'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
dumped mdsmap epoch 35
fail_all_mds: 495: mds_gids=
fail_all_mds: 499: check_mds_active
check_mds_active: 471: ceph mds dump
check_mds_active: 471: grep active
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
dumped mdsmap epoch 35
test_mon_mds: 648: ceph fs rm cephfs --yes-i-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
test_mon_mds: 650: set +e
test_mon_mds: 651: ceph mds newfs 17 20 --yes-i-really-mean-it
test_mon_mds: 652: check_response erasure-code 22 22
check_response: 108: expected_stderr_string=erasure-code
check_response: 109: retcode=22
check_response: 110: expected_retcode=22
check_response: 111: '[' 22 -a 22 '!=' 22 ']'
check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851
test_mon_mds: 653: ceph mds newfs 20 16 --yes-i-really-mean-it
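For commands that must both fail and emit a specific error, the script drops out of set -e and pairs each command with check_response, as at lines 641 and 652 above (the check_response at line 654, which follows, validates the mds newfs attempt just traced). A sketch consistent with the traced lines 108-116, assuming $TMPFILE is the /tmp/cephtool30851/test_invalid.30851 file the greps read; reconstructed, not verbatim:

function check_response()
{
	expected_stderr_string=$1
	retcode=$2
	expected_retcode=$3
	# Compare return codes only when the caller supplied one; the operands
	# are expanded unquoted exactly as in the trace ('[' 22 -a 22 '!=' 22 ']'),
	# so an empty expected_retcode makes the whole test false and skips the check.
	if [ "$expected_retcode" -a $retcode != $expected_retcode ] ; then
		echo "return code invalid: $retcode, expected $expected_retcode" >&2
		exit 1
	fi
	# trace line 116: look for the expected error text in the captured output
	if ! grep "$expected_stderr_string" $TMPFILE >/dev/null 2>&1 ; then
		echo "Missing $expected_stderr_string in output" >&2
		exit 1
	fi
}

Call sites pass the expected stderr substring and, for negative tests such as these erasure-code checks, the actual and expected return codes (check_response erasure-code 22 22, i.e. EINVAL).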
test_mon_mds: 654: check_response erasure-code 22 22 check_response: 108: expected_stderr_string=erasure-code check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 655: ceph mds newfs 20 20 --yes-i-really-mean-it test_mon_mds: 656: check_response erasure-code 22 22 check_response: 108: expected_stderr_string=erasure-code check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 657: ceph fs new cephfs fs_metadata mds-ec-pool test_mon_mds: 658: check_response erasure-code 22 22 check_response: 108: expected_stderr_string=erasure-code check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 659: ceph fs new cephfs mds-ec-pool fs_data test_mon_mds: 660: check_response erasure-code 22 22 check_response: 108: expected_stderr_string=erasure-code check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 661: ceph fs new cephfs mds-ec-pool mds-ec-pool test_mon_mds: 662: check_response erasure-code 22 22 check_response: 108: expected_stderr_string=erasure-code check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep erasure-code /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 663: set -e test_mon_mds: 667: ceph osd pool create mds-tier 2 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'mds-tier' created test_mon_mds: 668: ceph osd tier add mds-ec-pool mds-tier *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'mds-tier' is now (or already was) a tier of 'mds-ec-pool' test_mon_mds: 669: ceph osd tier set-overlay mds-ec-pool mds-tier *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** overlay for 'mds-ec-pool' is now (or already was) 'mds-tier' test_mon_mds: 670: ceph osd tier cache-mode mds-tier writeback *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set cache-mode for pool 'mds-tier' to writeback test_mon_mds: 671: ceph osd dump test_mon_mds: 671: grep 'pool.* '\''mds-tier' test_mon_mds: 671: awk '{print $2;}' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_mds: 671: tier_poolnum=21 test_mon_mds: 673: set -e test_mon_mds: 674: ceph fs new cephfs fs_metadata mds-ec-pool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** new fs with metadata pool 17 and data pool 20 test_mon_mds: 678: set +e test_mon_mds: 679: ceph osd tier remove-overlay mds-ec-pool test_mon_mds: 680: check_response 'in use by CephFS' 16 16 check_response: 108: expected_stderr_string='in use by CephFS' check_response: 109: retcode=16 check_response: 110: expected_retcode=16 check_response: 111: '[' 16 -a 16 '!=' 16 ']' check_response: 116: grep 'in use by CephFS' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 681: ceph osd tier remove mds-ec-pool mds-tier test_mon_mds: 682: check_response 'in use by CephFS' 16 16 check_response: 108: expected_stderr_string='in use by CephFS' check_response: 109: 
retcode=16 check_response: 110: expected_retcode=16 check_response: 111: '[' 16 -a 16 '!=' 16 ']' check_response: 116: grep 'in use by CephFS' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 683: set -e test_mon_mds: 685: fail_all_mds fail_all_mds: 494: ceph mds cluster_down *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked mdsmap DOWN fail_all_mds: 495: get_mds_gids get_mds_gids: 489: ceph mds dump --format=json get_mds_gids: 489: python -c 'import json; import sys; print '\'' '\''.join([m['\''gid'\''].__str__() for m in json.load(sys.stdin)['\''info'\''].values()])' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 38 fail_all_mds: 495: mds_gids= fail_all_mds: 499: check_mds_active check_mds_active: 471: ceph mds dump check_mds_active: 471: grep active *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 38 test_mon_mds: 686: ceph fs rm cephfs --yes-i-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_mds: 689: set +e test_mon_mds: 690: ceph mds newfs 17 21 --yes-i-really-mean-it test_mon_mds: 691: check_response 'in use as a cache tier' 22 22 check_response: 108: expected_stderr_string='in use as a cache tier' check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep 'in use as a cache tier' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 692: ceph mds newfs 21 16 --yes-i-really-mean-it test_mon_mds: 693: check_response 'in use as a cache tier' 22 22 check_response: 108: expected_stderr_string='in use as a cache tier' check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep 'in use as a cache tier' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 694: ceph mds newfs 21 21 --yes-i-really-mean-it test_mon_mds: 695: check_response 'in use as a cache tier' 22 22 check_response: 108: expected_stderr_string='in use as a cache tier' check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep 'in use as a cache tier' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 696: ceph fs new cephfs fs_metadata mds-tier test_mon_mds: 697: check_response 'in use as a cache tier' 22 22 check_response: 108: expected_stderr_string='in use as a cache tier' check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep 'in use as a cache tier' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 698: ceph fs new cephfs mds-tier fs_data test_mon_mds: 699: check_response 'in use as a cache tier' 22 22 check_response: 108: expected_stderr_string='in use as a cache tier' check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep 'in use as a cache tier' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 700: ceph fs new cephfs mds-tier mds-tier test_mon_mds: 701: check_response 'in use as a cache tier' 22 22 check_response: 108: expected_stderr_string='in use as a cache tier' check_response: 109: retcode=22 check_response: 110: expected_retcode=22 check_response: 111: '[' 22 -a 22 '!=' 22 ']' check_response: 116: grep 'in use as a cache tier' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 702: set -e test_mon_mds: 705: ceph osd tier 
remove-overlay mds-ec-pool *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** there is now (or already was) no overlay for 'mds-ec-pool' test_mon_mds: 706: ceph osd tier remove mds-ec-pool mds-tier *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'mds-tier' is now (or already was) not a tier of 'mds-ec-pool' test_mon_mds: 709: ceph fs new cephfs fs_metadata mds-tier *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** new fs with metadata pool 17 and data pool 21 test_mon_mds: 713: set +e test_mon_mds: 714: ceph osd tier add mds-ec-pool mds-tier test_mon_mds: 715: check_response 'in use by CephFS' 16 16 check_response: 108: expected_stderr_string='in use by CephFS' check_response: 109: retcode=16 check_response: 110: expected_retcode=16 check_response: 111: '[' 16 -a 16 '!=' 16 ']' check_response: 116: grep 'in use by CephFS' /tmp/cephtool30851/test_invalid.30851 test_mon_mds: 716: set -e test_mon_mds: 718: fail_all_mds fail_all_mds: 494: ceph mds cluster_down *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked mdsmap DOWN fail_all_mds: 495: get_mds_gids get_mds_gids: 489: ceph mds dump --format=json get_mds_gids: 489: python -c 'import json; import sys; print '\'' '\''.join([m['\''gid'\''].__str__() for m in json.load(sys.stdin)['\''info'\''].values()])' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 41 fail_all_mds: 495: mds_gids= fail_all_mds: 499: check_mds_active check_mds_active: 471: ceph mds dump check_mds_active: 471: grep active *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped mdsmap epoch 41 test_mon_mds: 719: ceph fs rm cephfs --yes-i-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_mds: 722: ceph osd pool delete mds-tier mds-tier --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'mds-tier' removed test_mon_mds: 723: ceph osd pool delete mds-ec-pool mds-ec-pool --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'mds-ec-pool' removed test_mon_mds: 725: ceph mds stat *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** e42: 0/0/0 up test_mon_mds: 732: ceph osd pool delete fs_data fs_data --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'fs_data' removed test_mon_mds: 733: ceph osd pool delete fs_metadata fs_metadata --yes-i-really-really-mean-it *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** pool 'fs_metadata' removed : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_mon_mon test_mon_mon: 739: ceph mon dump *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** dumped monmap epoch 1 epoch 1 fsid e12b5778-21f1-4b32-9b0e-f5f1d48eeafe last_changed 2014-10-08 11:19:00.829984 created 2014-10-08 11:19:00.829984 0: 127.0.0.1:6789/0 mon.a test_mon_mon: 740: ceph mon getmap -o /tmp/cephtool30851/monmap.30851 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got monmap epoch 1 test_mon_mon: 741: '[' -s /tmp/cephtool30851/monmap.30851 ']' test_mon_mon: 743: ceph mon_status *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** 
{"name":"a","rank":0,"state":"leader","election_epoch":2,"quorum":[0],"outside_quorum":[],"extra_probe_peers":[],"sync_provider":[],"monmap":{"epoch":1,"fsid":"e12b5778-21f1-4b32-9b0e-f5f1d48eeafe","modified":"2014-10-08 11:19:00.829984","created":"2014-10-08 11:19:00.829984","mons":[{"rank":0,"name":"a","addr":"127.0.0.1:6789\/0"}]}} : 1341: set +x *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** : 1340: test_mon_osd test_mon_osd: 751: bl=192.168.0.1:0/1000 test_mon_osd: 752: ceph osd blacklist add 192.168.0.1:0/1000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** blacklisting 192.168.0.1:0/1000 until 2014-10-08 12:21:01.429977 (3600 sec) test_mon_osd: 753: ceph osd blacklist ls test_mon_osd: 753: grep 192.168.0.1:0/1000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** listed 2 entries 192.168.0.1:0/1000 2014-10-08 12:21:01.429977 test_mon_osd: 754: ceph osd blacklist rm 192.168.0.1:0/1000 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** un-blacklisting 192.168.0.1:0/1000 test_mon_osd: 755: expect_false 'ceph osd blacklist ls | grep 192.168.0.1:0/1000' expect_false: 45: set -x expect_false: 46: 'ceph osd blacklist ls | grep 192.168.0.1:0/1000' ../qa/workunits/cephtool/test.sh: line 46: ceph osd blacklist ls | grep 192.168.0.1:0/1000: No such file or directory expect_false: 46: return 0 test_mon_osd: 757: bl=192.168.0.1 test_mon_osd: 759: ceph osd blacklist add 192.168.0.1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** blacklisting 192.168.0.1:0/0 until 2014-10-08 12:21:02.539453 (3600 sec) test_mon_osd: 760: ceph osd blacklist ls test_mon_osd: 760: grep 192.168.0.1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** listed 2 entries 192.168.0.1:0/0 2014-10-08 12:21:02.539453 test_mon_osd: 761: ceph osd blacklist rm 192.168.0.1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** un-blacklisting 192.168.0.1:0/0 test_mon_osd: 762: expect_false 'ceph osd blacklist ls | grep 192.168.0.1' expect_false: 45: set -x expect_false: 46: 'ceph osd blacklist ls | grep 192.168.0.1' ../qa/workunits/cephtool/test.sh: line 46: ceph osd blacklist ls | grep 192.168.0.1: command not found expect_false: 46: return 0 test_mon_osd: 763: expect_false 'ceph osd blacklist 192.168.0.1/-1' expect_false: 45: set -x expect_false: 46: 'ceph osd blacklist 192.168.0.1/-1' ../qa/workunits/cephtool/test.sh: line 46: ceph osd blacklist 192.168.0.1/-1: No such file or directory expect_false: 46: return 0 test_mon_osd: 764: expect_false 'ceph osd blacklist 192.168.0.1/foo' expect_false: 45: set -x expect_false: 46: 'ceph osd blacklist 192.168.0.1/foo' ../qa/workunits/cephtool/test.sh: line 46: ceph osd blacklist 192.168.0.1/foo: No such file or directory expect_false: 46: return 0 test_mon_osd: 769: ceph osd crush tunables legacy *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** adjusted tunables profile to legacy test_mon_osd: 770: ceph osd crush show-tunables test_mon_osd: 770: grep argonaut *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "profile": "argonaut", test_mon_osd: 771: ceph osd crush tunables bobtail *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** adjusted tunables profile to bobtail test_mon_osd: 772: ceph osd crush show-tunables test_mon_osd: 772: grep bobtail *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "profile": "bobtail", test_mon_osd: 773: ceph osd crush tunables firefly *** DEVELOPER 
MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** adjusted tunables profile to firefly test_mon_osd: 774: ceph osd crush show-tunables test_mon_osd: 774: grep firefly *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "profile": "firefly", test_mon_osd: 780: ceph osd scrub 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 instructed to scrub test_mon_osd: 781: ceph osd deep-scrub 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 instructed to deep-scrub test_mon_osd: 782: ceph osd repair 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 instructed to repair test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set noup *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set noup test_mon_osd: 787: ceph osd unset noup *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset noup test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set nodown *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set nodown test_mon_osd: 787: ceph osd unset nodown *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset nodown test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set noin *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set noin test_mon_osd: 787: ceph osd unset noin *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset noin test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set noout *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set noout test_mon_osd: 787: ceph osd unset noout *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset noout test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set noscrub *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set noscrub test_mon_osd: 787: ceph osd unset noscrub *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset noscrub test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set nodeep-scrub *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set nodeep-scrub test_mon_osd: 787: ceph osd unset nodeep-scrub *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset nodeep-scrub test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set nobackfill *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set nobackfill test_mon_osd: 787: ceph osd unset nobackfill *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset nobackfill test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub nobackfill norecover notieragent test_mon_osd: 786: ceph osd set norecover *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set norecover test_mon_osd: 787: ceph osd unset norecover *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset norecover test_mon_osd: 784: for f in noup nodown noin noout noscrub nodeep-scrub 
nobackfill norecover notieragent test_mon_osd: 786: ceph osd set notieragent *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set notieragent test_mon_osd: 787: ceph osd unset notieragent *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset notieragent test_mon_osd: 789: expect_false ceph osd set bogus expect_false: 45: set -x expect_false: 46: ceph osd set bogus *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Invalid command: bogus not in pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : set Error EINVAL: invalid command expect_false: 46: return 0 test_mon_osd: 790: expect_false ceph osd unset bogus expect_false: 45: set -x expect_false: 46: ceph osd unset bogus *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Invalid command: bogus not in pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : unset Error EINVAL: invalid command expect_false: 46: return 0 test_mon_osd: 792: ceph osd set noup *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set noup test_mon_osd: 793: ceph osd down 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked down osd.0. test_mon_osd: 794: ceph osd dump test_mon_osd: 794: grep 'osd.0 down' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 down in weight 1 up_from 4 up_thru 120 down_at 158 last_clean_interval [0,0) 127.0.0.1:6800/30001 127.0.0.1:6801/30001 127.0.0.1:6802/30001 127.0.0.1:6803/30001 exists 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 test_mon_osd: 795: ceph osd unset noup *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** unset noup test_mon_osd: 796: (( i=0 )) test_mon_osd: 796: (( i < 100 )) test_mon_osd: 797: ceph osd dump test_mon_osd: 797: grep 'osd.0 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_osd: 798: echo 'waiting for osd.0 to come back up' waiting for osd.0 to come back up test_mon_osd: 799: sleep 10 test_mon_osd: 796: (( i++ )) test_mon_osd: 796: (( i < 100 )) test_mon_osd: 797: ceph osd dump test_mon_osd: 797: grep 'osd.0 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up in weight 1 up_from 161 up_thru 161 down_at 158 last_clean_interval [4,160) 127.0.0.1:6800/30001 127.0.0.1:6814/1030001 127.0.0.1:6815/1030001 127.0.0.1:6816/1030001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 test_mon_osd: 801: break test_mon_osd: 804: ceph osd dump test_mon_osd: 804: grep 'osd.0 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up in weight 1 up_from 161 up_thru 161 down_at 158 last_clean_interval [4,160) 127.0.0.1:6800/30001 127.0.0.1:6814/1030001 127.0.0.1:6815/1030001 127.0.0.1:6816/1030001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 test_mon_osd: 806: ceph osd thrash 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** will thrash map for 0 epochs test_mon_osd: 808: ceph osd dump test_mon_osd: 808: grep 'osd.0 up' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up in weight 1 up_from 161 up_thru 161 down_at 158 last_clean_interval [4,160) 127.0.0.1:6800/30001 127.0.0.1:6814/1030001 127.0.0.1:6815/1030001 127.0.0.1:6816/1030001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 test_mon_osd: 809: ceph 
osd find 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "osd": 1, "ip": "127.0.0.1:6804\/30226", "crush_location": { "host": "gitbuilder-ceph-tarball-precise-amd64-basic", "root": "default"}} test_mon_osd: 810: ceph --format plain osd find 1 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "osd": 1, "ip": "127.0.0.1:6804\/30226", "crush_location": { "host": "gitbuilder-ceph-tarball-precise-amd64-basic", "root": "default"}} test_mon_osd: 811: ceph osd metadata 1 test_mon_osd: 811: grep distro *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "distro": "Ubuntu", "distro_codename": "precise", "distro_description": "Ubuntu 12.04.2 LTS", "distro_version": "12.04", test_mon_osd: 812: ceph --format plain osd metadata 1 test_mon_osd: 812: grep distro *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** "distro": "Ubuntu", "distro_codename": "precise", "distro_description": "Ubuntu 12.04.2 LTS", "distro_version": "12.04", test_mon_osd: 813: ceph osd out 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked out osd.0. test_mon_osd: 814: ceph osd dump test_mon_osd: 814: grep 'osd.0.*out' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up out weight 0 up_from 161 up_thru 161 down_at 158 last_clean_interval [4,160) 127.0.0.1:6800/30001 127.0.0.1:6814/1030001 127.0.0.1:6815/1030001 127.0.0.1:6816/1030001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 test_mon_osd: 815: ceph osd in 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** marked in osd.0. test_mon_osd: 816: ceph osd dump test_mon_osd: 816: grep 'osd.0.*in' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** osd.0 up in weight 1 up_from 161 up_thru 161 down_at 158 last_clean_interval [4,160) 127.0.0.1:6800/30001 127.0.0.1:6814/1030001 127.0.0.1:6815/1030001 127.0.0.1:6816/1030001 exists,up 6528f0a4-dabf-46fa-ab4c-36a7e41f0742 test_mon_osd: 817: ceph osd find 0 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "osd": 0, "ip": "127.0.0.1:6800\/30001", "crush_location": { "host": "gitbuilder-ceph-tarball-precise-amd64-basic", "root": "default"}} test_mon_osd: 819: f=/tmp/cephtool30851/map.30851 test_mon_osd: 820: ceph osd getcrushmap -o /tmp/cephtool30851/map.30851 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got crush map from osdmap epoch 165 test_mon_osd: 821: '[' -s /tmp/cephtool30851/map.30851 ']' test_mon_osd: 822: rm /tmp/cephtool30851/map.30851 test_mon_osd: 823: ceph osd getmap -o /tmp/cephtool30851/map.30851 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** got osdmap epoch 165 test_mon_osd: 824: '[' -s /tmp/cephtool30851/map.30851 ']' test_mon_osd: 825: rm /tmp/cephtool30851/map.30851 test_mon_osd: 826: ceph osd getmaxosd test_mon_osd: 826: sed -e 's/max_osd = //' -e 's/ in epoch.*//' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_osd: 826: save=3 test_mon_osd: 827: ceph osd setmaxosd 10 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set new max_osd = 10 test_mon_osd: 828: ceph osd getmaxosd test_mon_osd: 828: grep 'max_osd = 10' *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** max_osd = 10 in epoch 167 test_mon_osd: 829: ceph osd setmaxosd 3 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** set new max_osd = 3 test_mon_osd: 830: ceph osd getmaxosd test_mon_osd: 830: grep 'max_osd = 3' *** DEVELOPER MODE: setting 
PATH, PYTHONPATH and LD_LIBRARY_PATH *** max_osd = 3 in epoch 168 test_mon_osd: 747: ceph osd ls *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** test_mon_osd: 832: for id in '`ceph osd ls`' test_mon_osd: 833: retry_eagain 5 map_enxio_to_eagain ceph tell osd.0 version retry_eagain: 63: local max=5 retry_eagain: 64: shift retry_eagain: 65: local status retry_eagain: 66: local tmpfile=/tmp/cephtool30851/retry_eagain.30851 retry_eagain: 67: local count retry_eagain: 62: seq 1 5 retry_eagain: 68: for count in '$(seq 1 $max)' retry_eagain: 69: status=0 retry_eagain: 70: map_enxio_to_eagain ceph tell osd.0 version retry_eagain: 71: test 0 = 0 retry_eagain: 73: break retry_eagain: 77: test 1 = 5 retry_eagain: 80: cat /tmp/cephtool30851/retry_eagain.30851 map_enxio_to_eagain: 93: local status=0 map_enxio_to_eagain: 94: local tmpfile=/tmp/cephtool30851/map_enxio_to_eagain.30851 map_enxio_to_eagain: 96: ceph tell osd.0 version map_enxio_to_eagain: 97: test 0 '!=' 0 map_enxio_to_eagain: 101: cat /tmp/cephtool30851/map_enxio_to_eagain.30851 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** { "version": "ceph version 0.86-267-ge27cf41 (e27cf4139fbe895ef4d1817365275e6a50d603d8)"} map_enxio_to_eagain: 102: rm /tmp/cephtool30851/map_enxio_to_eagain.30851 map_enxio_to_eagain: 103: return 0 retry_eagain: 81: rm /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 82: return 0 test_mon_osd: 832: for id in '`ceph osd ls`' test_mon_osd: 833: retry_eagain 5 map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 63: local max=5 retry_eagain: 64: shift retry_eagain: 65: local status retry_eagain: 66: local tmpfile=/tmp/cephtool30851/retry_eagain.30851 retry_eagain: 67: local count retry_eagain: 62: seq 1 5 retry_eagain: 68: for count in '$(seq 1 $max)' retry_eagain: 69: status=0 retry_eagain: 70: map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 70: status=6 retry_eagain: 71: test 6 = 0 retry_eagain: 72: grep --quiet EAGAIN /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 75: sleep 1 retry_eagain: 68: for count in '$(seq 1 $max)' retry_eagain: 69: status=0 retry_eagain: 70: map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 70: status=6 retry_eagain: 71: test 6 = 0 retry_eagain: 72: grep --quiet EAGAIN /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 75: sleep 1 retry_eagain: 68: for count in '$(seq 1 $max)' retry_eagain: 69: status=0 retry_eagain: 70: map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 70: status=6 retry_eagain: 71: test 6 = 0 retry_eagain: 72: grep --quiet EAGAIN /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 75: sleep 1 retry_eagain: 68: for count in '$(seq 1 $max)' retry_eagain: 69: status=0 retry_eagain: 70: map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 70: status=6 retry_eagain: 71: test 6 = 0 retry_eagain: 72: grep --quiet EAGAIN /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 75: sleep 1 retry_eagain: 68: for count in '$(seq 1 $max)' retry_eagain: 69: status=0 retry_eagain: 70: map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 70: status=6 retry_eagain: 71: test 6 = 0 retry_eagain: 72: grep --quiet EAGAIN /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 75: sleep 1 retry_eagain: 77: test 5 = 5 retry_eagain: 78: echo retried with non zero exit status, 5 times: map_enxio_to_eagain ceph tell osd.1 version retried with non zero exit status, 5 times: map_enxio_to_eagain ceph tell osd.1 version retry_eagain: 80: cat /tmp/cephtool30851/retry_eagain.30851 map_enxio_to_eagain: 93: local status=0 
map_enxio_to_eagain: 94: local tmpfile=/tmp/cephtool30851/map_enxio_to_eagain.30851 map_enxio_to_eagain: 96: ceph tell osd.1 version map_enxio_to_eagain: 96: status=6 map_enxio_to_eagain: 97: test 6 '!=' 0 map_enxio_to_eagain: 98: grep --quiet ENXIO /tmp/cephtool30851/map_enxio_to_eagain.30851 map_enxio_to_eagain: 99: echo 'EAGAIN added by ../qa/workunits/cephtool/test.sh::map_enxio_to_eagain' map_enxio_to_eagain: 101: cat /tmp/cephtool30851/map_enxio_to_eagain.30851 *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** Error ENXIO: problem getting command descriptions from osd.1 EAGAIN added by ../qa/workunits/cephtool/test.sh::map_enxio_to_eagain map_enxio_to_eagain: 102: rm /tmp/cephtool30851/map_enxio_to_eagain.30851 map_enxio_to_eagain: 103: return 6 retry_eagain: 81: rm /tmp/cephtool30851/retry_eagain.30851 retry_eagain: 82: return 6 test_mon_osd: 1: rm -fr /tmp/cephtool30851 ================ STOP ================= FAIL: test/vstart_wrapped_tests.sh Invalid command: missing required parameter entity() auth add <entity> {<caps> [<caps>...]} : add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command .Invalid command: missing required parameter entity() auth caps <entity> <caps> [<caps>...] : update caps for <name> from caps specified in the command Invalid command: saw 0 of caps() [...], expected at least 1 auth caps <entity> <caps> [<caps>...] : update caps for <name> from caps specified in the command .Invalid command: missing required parameter entity() auth del <entity> : delete all caps for <name> Invalid command: unused arguments: ['toomany'] auth del <entity> : delete all caps for <name> .Invalid command: unused arguments: ['toomany'] auth export {<entity>} : write keyring for requested entity, or master keyring if none given .Invalid command: missing required parameter entity() auth get <entity> : write keyring file with requested key Invalid command: unused arguments: ['toomany'] auth get <entity> : write keyring file with requested key .Invalid command: missing required parameter entity() auth get-key <entity> : display requested key Invalid command: unused arguments: ['toomany'] auth get-key <entity> : display requested key .Invalid command: missing required parameter entity() auth get-or-create <entity> {<caps> [<caps>...]} : add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command .Invalid command: missing required parameter entity() auth get-or-create-key <entity> {<caps> [<caps>...]} : get, or add, key for <entity> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
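The retry_eagain and map_enxio_to_eagain helpers traced above come from ../qa/workunits/cephtool/test.sh: the wrapper maps an ENXIO from an unreachable OSD to a retryable EAGAIN, and the retry loop gives up after five attempts, which is exactly what happened to `ceph tell osd.1 version` here. A minimal sketch reconstructed from the xtrace (not the verbatim test.sh source; $TMPDIR is assumed to be the temp directory the suite creates):

retry_eagain() {
    local max=$1
    shift
    local status
    local tmpfile=$TMPDIR/retry_eagain.$$   # $TMPDIR: assumed set up by the suite
    local count
    for count in $(seq 1 $max) ; do
        status=0
        "$@" > $tmpfile 2>&1 || status=$?
        test $status = 0 && break              # success, stop retrying
        grep --quiet EAGAIN $tmpfile || break  # a real error, not retryable
        sleep 1
    done
    if test $count = $max ; then
        echo retried with non zero exit status, $max times: "$@"
    fi
    cat $tmpfile
    rm $tmpfile
    return $status
}

map_enxio_to_eagain() {
    local status=0
    local tmpfile=$TMPDIR/map_enxio_to_eagain.$$
    "$@" > $tmpfile 2>&1 || status=$?
    if test $status != 0 && grep --quiet ENXIO $tmpfile ; then
        # make the ENXIO retryable for retry_eagain above
        echo "EAGAIN added by $0::map_enxio_to_eagain" >> $tmpfile
    fi
    cat $tmpfile
    rm $tmpfile
    return $status
}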
.Invalid command: unused arguments: ['toomany'] auth import : auth import: read keyring file from -i <file> .Invalid command: unused arguments: ['toomany'] auth list : list authentication state .Invalid command: missing required parameter entity() auth print-key <entity> : display requested key Invalid command: unused arguments: ['toomany'] auth print-key <entity> : display requested key Invalid command: missing required parameter entity() auth print_key <entity> : display requested key Invalid command: unused arguments: ['toomany'] auth print_key <entity> : display requested key .Invalid command: missing required parameter key() config-key del <key> : delete <key> Invalid command: unused arguments: ['toomany'] config-key del <key> : delete <key> .Invalid command: missing required parameter key() config-key exists <key> : check for <key>'s existence Invalid command: unused arguments: ['toomany'] config-key exists <key> : check for <key>'s existence .Invalid command: missing required parameter key() config-key get <key> : get <key> Invalid command: unused arguments: ['toomany'] config-key get <key> : get <key> .Invalid command: unused arguments: ['toomany'] config-key list : list keys .Invalid command: missing required parameter key() config-key put <key> {<val>} : put <key>, value <val> Invalid command: unused arguments: ['toomany'] config-key put <key> {<val>} : put <key>, value <val> .Invalid command: unused arguments: ['toomany'] fs ls : list filesystems ..Invalid command: unused arguments: ['toomany'] fs rm <fs_name> {--yes-i-really-mean-it} : disable the named filesystem ..Invalid command: unused arguments: ['toomany'] mds cluster_down : take MDS cluster down .Invalid command: unused arguments: ['toomany'] mds cluster_up : bring MDS cluster up .Invalid command: missing required parameter feature() mds compat rm_compat <int[0-]> : remove compatible feature Invalid command: -1 not in range [0L] mds compat rm_compat <int[0-]> : remove compatible feature Invalid command: unused arguments: ['1'] mds compat rm_compat <int[0-]> : remove compatible feature .Invalid command: missing required parameter show mds compat show : show mds compatibility settings Invalid command: unused arguments: ['toomany'] mds compat show : show mds compatibility settings .Invalid command: missing required parameter who() mds deactivate <who> : stop mds Invalid command: unused arguments: ['toomany'] mds deactivate <who> : stop mds .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] mds dump {<int[0-]>} : dump info, optionally from epoch Invalid command: unused arguments: ['1'] mds dump {<int[0-]>} : dump info, optionally from epoch .Invalid command: missing required parameter who() mds fail <who> : force mds to status failed Invalid command: unused arguments: ['toomany'] mds fail <who> : force mds to status failed .Invalid command: missing required parameter feature() mds compat rm_incompat <int[0-]> : remove incompatible feature Invalid command: -1 not in range [0L] mds compat rm_incompat <int[0-]> : remove incompatible feature Invalid command: unused arguments: ['1'] mds compat rm_incompat <int[0-]> : remove incompatible feature .Invalid command: invalid not in max_mds|max_file_size|allow_new_snaps|inline_data mds set max_mds|max_file_size|allow_new_snaps|inline_data <val> {<confirm>} : set mds parameter <var> to <val> .Invalid command: missing required parameter metadata() mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it} : make new filesystem using pools <metadata> and <data> Invalid command: missing required parameter data() mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it} : make new filesystem using pools <metadata> and <data> Invalid command: unused arguments: ['toomany'] mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it} : make new filesystem using pools <metadata> and <data> Invalid command: -1 not in range [0L] mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it} : make new filesystem using pools <metadata> and <data> Invalid command: -1 not in range [0L] mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it} : make new filesystem using pools <metadata> and <data> ..Invalid command: missing required parameter gid() mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: missing required parameter who() mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: unused arguments: ['toomany'] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: unused arguments: ['toomany'] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: unused arguments: ['toomany'] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: -1 not in range [0L] mds rm <int[0-]> <name (type.id)> : remove nonactive mds Invalid command: unused arguments: ['toomany'] mds rm <int[0-]> <name (type.id)> : remove nonactive mds .Invalid command: missing required parameter who() mds rmfailed <int[0-]> : remove failed mds Invalid command: -1 not in range [0L] mds rmfailed <int[0-]> : remove failed mds Invalid command: unused arguments: ['1'] mds rmfailed <int[0-]> : remove failed mds .Invalid command: missing required parameter maxmds() mds set_max_mds <int[0-]> : set max MDS index Invalid command: -1 not in range [0L] mds set_max_mds <int[0-]> : set max MDS index Invalid command: unused arguments: ['1'] mds set_max_mds <int[0-]> : set max MDS index .Invalid command: missing required parameter gid() mds set_state <int[0-]> <int[0-20]> : set mds state of <gid> to <numeric-state> Invalid command: -1 not in range [0L] mds set_state <int[0-]> <int[0-20]> : set mds state of <gid> to <numeric-state> Invalid command: -1 not in range [0L, 20L] mds set_state <int[0-]> <int[0-20]> : set mds state of <gid> to <numeric-state> Invalid command: 21 not in range [0L, 20L] mds set_state <int[0-]> <int[0-20]> : set mds state of <gid> to <numeric-state> .Invalid command: missing required parameter epoch() mds setmap <int[0-]> : set mds map; must supply correct epoch number Invalid command: -1 not in range [0L] mds setmap <int[0-]> : set mds map; must supply correct epoch number Invalid command: unused arguments: ['1'] mds setmap <int[0-]> : set mds map; must supply correct epoch number .Invalid command: unused arguments: ['toomany'] mds stat : show MDS status .Invalid command: missing required parameter who() mds stop <who> : stop mds Invalid command: unused arguments: ['toomany'] mds stop <who> : stop mds .Invalid command: missing required parameter who() mds tell <who> <args> [<args>...] : send command to particular mds Invalid command: saw 0 of args() [...], expected at least 1 mds tell <who> <args> [<args>...] : send command to particular mds .
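Each signature above pairs required parameters with optional ones in braces, which is why the suite probes every command with too few, too many, and out-of-range arguments. For contrast, well-formed counterparts of a few of the rejected calls (illustrative values, assuming a running test cluster):

# 'config-key put <key> {<val>}': required key, optional value
ceph config-key put testkey testvalue
ceph config-key exists testkey
ceph config-key del testkey
# 'mds compat rm_compat <int[0-]>' wants a non-negative int, so -1 is rejected
ceph mds compat show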
Invalid command: missing required parameter name() mon add <name> <IPaddr[:port]> : add new monitor named <name> at <addr> Invalid command: missing required parameter addr() mon add <name> <IPaddr[:port]> : add new monitor named <name> at <addr> Invalid command: 400.500.600.700: invalid IPv4 address mon add <name> <IPaddr[:port]> : add new monitor named <name> at <addr> Invalid command: unused arguments: ['toomany'] mon add <name> <IPaddr[:port]> : add new monitor named <name> at <addr> .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] mon dump {<int[0-]>} : dump formatted monmap (optionally from epoch) Invalid command: unused arguments: ['1'] mon dump {<int[0-]>} : dump formatted monmap (optionally from epoch) .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] mon getmap {<int[0-]>} : get monmap Invalid command: unused arguments: ['1'] mon getmap {<int[0-]>} : get monmap .Invalid command: missing required parameter name() mon remove <name> : remove monitor named <name> Invalid command: unused arguments: ['toomany'] mon remove <name> : remove monitor named <name> .Invalid command: unused arguments: ['toomany'] mon stat : summarize monitor status ..invalid not valid: invalid not in detail Invalid command: unused arguments: ['invalid'] df {detail} : show cluster free space stats Invalid command: unused arguments: ['toomany'] df {detail} : show cluster free space stats ..invalid not valid: invalid not in detail Invalid command: unused arguments: ['invalid'] health {detail} : show cluster health Invalid command: unused arguments: ['toomany'] health {detail} : show cluster health .Invalid command: missing required parameter heapcmd(dump|start_profiler|stop_profiler|release|stats) heap dump|start_profiler|stop_profiler|release|stats : show heap usage info (available only if compiled with tcmalloc) Invalid command: invalid not in dump|start_profiler|stop_profiler|release|stats heap dump|start_profiler|stop_profiler|release|stats : show heap usage info (available only if compiled with tcmalloc) .Invalid command: saw 0 of injected_args() [...], expected at least 1 injectargs <injected_args> [<injected_args>...] : inject config arguments into monitor .Invalid command: saw 0 of logtext() [...], expected at least 1 log <logtext> [<logtext>...] : log supplied text to the monitor log ..Invalid command: missing required parameter quorumcmd(enter|exit) quorum enter|exit : enter or exit quorum Invalid command: invalid not in enter|exit quorum enter|exit : enter or exit quorum Invalid command: unused arguments: ['toomany'] quorum enter|exit : enter or exit quorum .....Invalid command: missing required parameter force sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing} : force sync of and clear monitor store Invalid command: unused arguments: ['toomany'] sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing} : force sync of and clear monitor store .Invalid command: missing required parameter target() tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: CephName: no . in invalid tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: CephName: no . in osd tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: saw 0 of args() [...], expected at least 1 tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: CephName: no . in mon tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: saw 0 of args() [...], expected at least 1 tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: CephName: no . in client tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: saw 0 of args() [...], expected at least 1 tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: CephName: no . in mds tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon Invalid command: saw 0 of args() [...], expected at least 1 tell <name (type.id)> <args> [<args>...] : send a command to a specific daemon .Invalid command: invalid not valid IPv6 address osd blacklist add|rm <EntityAddr> {<float[0.0-]>} : add (optionally until <expire> seconds from now) or remove <addr> from blacklist -1.0 not valid: -1.0 not in range [0.0] Invalid command: unused arguments: ['-1.0'] osd blacklist add|rm <EntityAddr> {<float[0.0-]>} : add (optionally until <expire> seconds from now) or remove <addr> from blacklist Invalid command: unused arguments: ['toomany'] osd blacklist add|rm <EntityAddr> {<float[0.0-]>} : add (optionally until <expire> seconds from now) or remove <addr> from blacklist Invalid command: invalid not valid IPv6 address osd blacklist add|rm <EntityAddr> {<float[0.0-]>} : add (optionally until <expire> seconds from now) or remove <addr> from blacklist -1.0 not valid: -1.0 not in range [0.0] Invalid command: unused arguments: ['-1.0'] osd blacklist add|rm <EntityAddr> {<float[0.0-]>} : add (optionally until <expire> seconds from now) or remove <addr> from blacklist Invalid command: unused arguments: ['toomany'] osd blacklist add|rm <EntityAddr> {<float[0.0-]>} : add (optionally until <expire> seconds from now) or remove <addr> from blacklist .Invalid command: missing required parameter ls osd blacklist ls : show blacklisted clients Invalid command: unused arguments: ['toomany'] osd blacklist ls : show blacklisted clients .no valid command found; 10 closest matches: osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote <val> {--yes-i-really-mean-it} osd pool set-quota <poolname> max_objects|max_bytes <val> osd pool rename <poolname> <poolname> osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} osd pool rmsnap <poolname> <snap> osd pool ls {detail} osd blacklist add|rm <EntityAddr> {<float[0.0-]>} osd pool mksnap <poolname> <snap> .invalid not valid: invalid UUID invalid: badly formed hexadecimal UUID string Invalid command: unused arguments: ['invalid'] osd create {<uuid>} : create new osd (with optional UUID) Invalid command: unused arguments: ['toomany'] osd create {<uuid>} : create new osd (with optional UUID)
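For comparison, well-formed versions of the blacklist and create calls probed above (addresses and values are illustrative):

# EntityAddr plus an optional expiry in seconds (a float >= 0.0)
ceph osd blacklist add 127.0.0.1:0/3891 600
ceph osd blacklist ls
ceph osd blacklist rm 127.0.0.1:0/3891
# the optional UUID must be well-formed hexadecimal
ceph osd create $(uuidgen)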
.Invalid command: missing required parameter show-tunables osd crush show-tunables : show current crush tunables Invalid command: missing required parameter weight() osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : add or update crushmap position and weight for <name> with <weight> and location <args> no valid command found; 10 closest matches: osd crush add-bucket <name> <type> Invalid command: invalid chars ^ in ^^^ osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : add or update crushmap position and weight for <name> with <weight> and location <args> .Invalid command: missing required parameter show-tunables osd crush show-tunables : show current crush tunables Invalid command: missing required parameter name() osd crush add-bucket <name> <type> : add no-parent (probably root) crush bucket <name> of type <type> Invalid command: unused arguments: ['toomany'] osd crush add-bucket <name> <type> : add no-parent (probably root) crush bucket <name> of type <type> Invalid command: invalid chars ^ in ^^^ osd crush add-bucket <name> <type> : add no-parent (probably root) crush bucket <name> of type <type> .Invalid command: missing required parameter show-tunables osd crush show-tunables : show current crush tunables Invalid command: missing required parameter weight() osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : create entry or move existing entry for <name> <weight> at/to location <args> Invalid command: -1.0 not in range [0.0] osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : create entry or move existing entry for <name> <weight> at/to location <args> Invalid command: invalid chars ^ in ^^^ osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : create entry or move existing entry for <name> <weight> at/to location <args> .Invalid command: missing required parameter show-tunables osd crush show-tunables : show current crush tunables Invalid command: unused arguments: ['toomany'] osd crush dump : dump crush map .Invalid command: missing required parameter name() osd crush link <name> <args> [<args>...] : link existing entry for <name> under location <args> Invalid command: saw 0 of args() [...], expected at least 1 osd crush link <name> <args> [<args>...] : link existing entry for <name> under location <args> .Invalid command: missing required parameter name() osd crush move <name> <args> [<args>...] : move existing entry for <name> to location <args> Invalid command: saw 0 of args() [...], expected at least 1 osd crush move <name> <args> [<args>...] : move existing entry for <name> to location <args> Invalid command: invalid chars ^ in ^^^ osd crush move <name> <args> [<args>...] : move existing entry for <name> to location <args> Invalid command: invalid chars ^ in ^^^ osd crush move <name> <args> [<args>...] : move existing entry for <name> to location <args> .Invalid command: missing required parameter name() osd crush reweight <name> <float[0.0-]> : change <name>'s weight to <weight> in crush map Invalid command: missing required parameter weight() osd crush reweight <name> <float[0.0-]> : change <name>'s weight to <weight> in crush map Invalid command: -1.0 not in range [0.0] osd crush reweight <name> <float[0.0-]> : change <name>'s weight to <weight> in crush map Invalid command: invalid chars ^ in ^^^ osd crush reweight <name> <float[0.0-]> : change <name>'s weight to <weight> in crush map .Invalid command: missing required parameter name() osd crush rm <name> {<ancestor>} : remove <name> from crush map (everywhere, or just at <ancestor>) Invalid command: unused arguments: ['toomany'] osd crush rm <name> {<ancestor>} : remove <name> from crush map (everywhere, or just at <ancestor>) Invalid command: missing required parameter name() osd crush remove <name> {<ancestor>} : remove <name> from crush map (everywhere, or just at <ancestor>) Invalid command: unused arguments: ['toomany'] osd crush remove <name> {<ancestor>} : remove <name> from crush map (everywhere, or just at <ancestor>) Invalid command: missing required parameter name() osd crush unlink <name> {<ancestor>} : unlink <name> from crush map (everywhere, or just at <ancestor>) Invalid command: unused arguments: ['toomany'] osd crush unlink <name> {<ancestor>} : unlink <name> from crush map (everywhere, or just at <ancestor>) .Invalid command: missing required parameter show-tunables osd crush show-tunables : show current crush tunables Invalid command: missing required parameter list osd crush rule list : list crush rules Invalid command: unused arguments: ['toomany'] osd crush rule list : list crush rules Invalid command: unused arguments: ['toomany'] osd crush rule ls : list crush rules .Invalid command: missing required parameter name() osd crush rule create-erasure <name> {<profile>} : create crush rule <name> for erasure coded pool created with <profile> (default default) Invalid command: invalid chars ^ in ^^^ osd crush rule create-erasure <name> {<profile>} : create crush rule <name> for erasure coded pool created with <profile> (default default) ^^^ not valid: invalid chars ^ in ^^^ Invalid command: unused arguments: ['^^^'] osd crush rule create-erasure <name> {<profile>} : create crush rule <name> for erasure coded pool created with <profile> (default default) .Invalid command: missing required parameter name() osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) Invalid command: missing required parameter root() osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) Invalid command: missing required parameter type() osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) Invalid command: invalid chars ^ in ^^^ osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) Invalid command: invalid chars | in ||| osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) Invalid command: invalid chars + in +++ osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) toomany not valid: toomany not in firstn|indep Invalid command: unused arguments: ['toomany'] osd crush rule create-simple <name> <root> <type> {firstn|indep} : create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools) .Invalid command: unused arguments: ['toomany'] osd crush rule dump {<name>} : dump crush rule <name> (default all) .Invalid command: missing required parameter name() osd crush rule rm <name> : remove crush rule <name> Invalid command: invalid chars ^ in ^^^^ osd crush rule rm <name> : remove crush rule <name> Invalid command: unused arguments: ['toomany'] osd crush rule rm <name> : remove crush rule <name> .Invalid command: missing required parameter show-tunables osd crush show-tunables : show current crush tunables Invalid command: missing required parameter weight() osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : update crushmap position and weight for <name> to <weight> with location <args> Invalid command: -1.0 not in range [0.0] osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : update crushmap position and weight for <name> to <weight> with location <args> Invalid command: invalid chars ^ in ^^^ osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] : update crushmap position and weight for <name> to <weight> with location <args> .Invalid command: missing required parameter profile(legacy|argonaut|bobtail|firefly|optimal|default) osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default : set crush tunables values to <profile> Invalid command: unused arguments: ['toomany'] osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default : set crush tunables values to <profile> .Invalid command: missing required parameter who() osd deep-scrub <who> : initiate deep scrub on osd <who> Invalid command: unused arguments: ['toomany'] osd deep-scrub <who> : initiate deep scrub on osd <who> .Invalid command: saw 0 of ids() [...], expected at least 1 osd down <ids> [<ids>...] : set osd(s) <id> [<id>...] down .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] osd dump {<int[0-]>} : print summary of OSD map Invalid command: unused arguments: ['1'] osd dump {<int[0-]>} : print summary of OSD map .Invalid command: missing required parameter name() osd erasure-code-profile get <name> : get erasure code profile <name> Invalid command: invalid chars ^ in ^^^^ osd erasure-code-profile get <name> : get erasure code profile <name> .Invalid command: unused arguments: ['toomany'] osd erasure-code-profile ls : list all erasure code profiles .Invalid command: missing required parameter name() osd erasure-code-profile rm <name> : remove erasure code profile <name> Invalid command: invalid chars ^ in ^^^^ osd erasure-code-profile rm <name> : remove erasure code profile <name> .Invalid command: missing required parameter name() osd erasure-code-profile set <name> {<profile> [<profile>...]} : create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS) Invalid command: invalid chars ^ in ^^^^ osd erasure-code-profile set <name> {<profile> [<profile>...]} : create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS) .Invalid command: missing required parameter id() osd find <int[0-]> : find osd <id> in the CRUSH map and show its location Invalid command: -1 not in range [0L] osd find <int[0-]> : find osd <id> in the CRUSH map and show its location Invalid command: unused arguments: ['1'] osd find <int[0-]> : find osd <id> in the CRUSH map and show its location .Invalid command: unused arguments: ['toomany'] osd getmaxosd : show largest OSD id .Invalid command: saw 0 of ids() [...], expected at least 1 osd in <ids> [<ids>...] : set osd(s) <id> [<id>...] in .Invalid command: missing required parameter id() osd lost <int[0-]> {--yes-i-really-mean-it} : mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL what? not valid: what? not in --yes-i-really-mean-it Invalid command: unused arguments: ['what?'] osd lost <int[0-]> {--yes-i-really-mean-it} : mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL Invalid command: -1 not in range [0L] osd lost <int[0-]> {--yes-i-really-mean-it} : mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL Invalid command: unused arguments: ['toomany'] osd lost <int[0-]> {--yes-i-really-mean-it} : mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
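The crush probes above fail on bad characters (^, |, +) and negative weights; accepted forms look like this (bucket and rule names are illustrative):

# add a bucket and place it in the hierarchy
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
# weights are floats >= 0.0
ceph osd crush reweight osd.0 1.0
# a simple replicated rule across hosts under root 'default'
ceph osd crush rule create-simple myrule default host firstn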
.Invalid command: unused arguments: ['toomany'] osd lspools {<int>} : list pools .Invalid command: missing required parameter pool() osd map <poolname> <objectname> : find pg for <object> in <pool> Invalid command: missing required parameter object() osd map <poolname> <objectname> : find pg for <object> in <pool> Invalid command: unused arguments: ['toomany'] osd map <poolname> <objectname> : find pg for <object> in <pool> .Invalid command: missing required parameter id() osd metadata <int[0-]> : fetch metadata for osd <id> Invalid command: -1 not in range [0L] osd metadata <int[0-]> : fetch metadata for osd <id> Invalid command: unused arguments: ['1'] osd metadata <int[0-]> : fetch metadata for osd <id> .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] osd getcrushmap {<int[0-]>} : get CRUSH map Invalid command: unused arguments: ['1'] osd getcrushmap {<int[0-]>} : get CRUSH map .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] osd getmap {<int[0-]>} : get OSD map Invalid command: unused arguments: ['1'] osd getmap {<int[0-]>} : get OSD map .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] osd ls {<int[0-]>} : show all OSD ids Invalid command: unused arguments: ['1'] osd ls {<int[0-]>} : show all OSD ids .-1 not valid: -1 not in range [0L] Invalid command: unused arguments: ['-1'] osd tree {<int[0-]>} : print OSD tree Invalid command: unused arguments: ['1'] osd tree {<int[0-]>} : print OSD tree .Invalid command: saw 0 of ids() [...], expected at least 1 osd out <ids> [<ids>...] : set osd(s) <id> [<id>...] out .Invalid command: unused arguments: ['toomany'] osd pause : pause osd .Invalid command: unused arguments: ['toomany'] osd perf : print dump of OSD perf summary stats .Invalid command: missing required parameter pool() osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} : create pool Invalid command: missing required parameter pg_num() osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} : create pool Invalid command: -1 not in range [0L] osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} : create pool ^^^ not valid: invalid chars ^ in ^^^ ruleset not valid: ruleset doesn't represent an int Invalid command: unused arguments: ['ruleset'] osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} : create pool toomany not valid: toomany doesn't represent an int Invalid command: unused arguments: ['toomany'] osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} : create pool INVALID not valid: INVALID not in replicated|erasure ruleset not valid: ruleset doesn't represent an int Invalid command: unused arguments: ['ruleset'] osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {} {} : create pool .Invalid command: missing required parameter pool() osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} : delete pool not really not valid: not really not in --yes-i-really-really-mean-it Invalid command: unused arguments: ['not really'] osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} : delete pool Invalid command: unused arguments: ['toomany'] osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} : delete pool .Invalid command: missing required parameter ls osd pool ls {detail} : list pools Invalid command: missing required parameter pool() osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote : get pool parameter <var> Invalid command: missing required parameter var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote) osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote : get pool parameter <var> Invalid command: unused arguments: ['toomany'] osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote : get pool parameter <var> Invalid command: invalid not in size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote : get pool parameter <var> .Invalid command: missing required parameter pool() osd pool mksnap <poolname> <snap> : make snapshot <snap> in <pool> Invalid command: missing required parameter snap() osd pool mksnap <poolname> <snap> : make snapshot <snap> in <pool> Invalid command: unused arguments: ['toomany'] osd pool mksnap <poolname> <snap> : make snapshot <snap> in <pool> .Invalid command: missing required parameter srcpool() osd pool rename <poolname> <poolname> : rename <srcpool> to <destpool> Invalid command: missing required parameter destpool() osd pool rename <poolname> <poolname> : rename <srcpool> to <destpool> Invalid command: unused arguments: ['toomany'] osd pool rename <poolname> <poolname> : rename <srcpool> to <destpool> .Invalid command: missing required parameter pool() osd pool rmsnap <poolname> <snap> : remove snapshot <snap> from <pool> Invalid command: missing required parameter snap() osd pool rmsnap <poolname> <snap> : remove snapshot <snap> from <pool> Invalid command: unused arguments: ['toomany'] osd pool rmsnap <poolname> <snap> : remove snapshot <snap> from <pool> .Invalid command: missing required parameter pool() osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote <val> {--yes-i-really-mean-it} : set pool parameter <var> to <val> Invalid command: missing required parameter var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote) osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote <val> {--yes-i-really-mean-it} : set pool parameter <var> to <val> toomany not valid: toomany not in --yes-i-really-mean-it Invalid command: unused arguments: ['toomany'] osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote <val> {--yes-i-really-mean-it} : set pool parameter <var> to <val> .Invalid command: missing required parameter pool() osd pool set-quota <poolname> max_objects|max_bytes <val> : set object or byte limit on pool Invalid command: missing required parameter field(max_objects|max_bytes) osd pool set-quota <poolname> max_objects|max_bytes <val> : set object or byte limit on pool Invalid command: missing required parameter val() osd pool set-quota <poolname> max_objects|max_bytes <val> : set object or byte limit on pool Invalid command: invalid not in max_objects|max_bytes osd pool set-quota <poolname> max_objects|max_bytes <val> : set object or byte limit on pool Invalid command: unused arguments: ['toomany'] osd pool set-quota <poolname> max_objects|max_bytes <val> : set object or byte limit on pool .Invalid command: missing required parameter who() osd repair <who> : initiate repair on osd <who> Invalid command: unused arguments: ['toomany'] osd repair <who> : initiate repair on osd <who> .Invalid command: missing required parameter id() osd reweight <int[0-]> <float[0.0-1.0]> : reweight osd to 0.0 < <weight> < 1.0 Invalid command: missing required parameter weight() osd reweight <int[0-]> <float[0.0-1.0]> : reweight osd to 0.0 < <weight> < 1.0 Invalid command: 2.0 not in range [0.0, 1.0] osd reweight <int[0-]> <float[0.0-1.0]> : reweight osd to 0.0 < <weight> < 1.0 Invalid command: -1 not in range [0L] osd reweight <int[0-]> <float[0.0-1.0]> : reweight osd to 0.0 < <weight> < 1.0 Invalid command: unused arguments: ['toomany'] osd reweight <int[0-]> <float[0.0-1.0]> : reweight osd to 0.0 < <weight> < 1.0 .50 not valid: 50 not in range [100L] Invalid command: unused arguments: ['50'] osd reweight-by-utilization {<int[100-]>} : reweight OSDs by utilization [overload-percentage-for-consideration, default 120] Invalid command: unused arguments: ['toomany'] osd reweight-by-utilization {<int[100-]>} : reweight OSDs by utilization [overload-percentage-for-consideration, default 120] .Invalid command: saw 0 of ids() [...], expected at least 1 osd rm <ids> [<ids>...] : remove osd(s) <id> [<id>...] in .
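Valid pool lifecycle calls corresponding to the probes above (pool names are illustrative; deletion requires naming the pool twice plus the confirmation flag):

ceph osd pool create foo 8 8 replicated
ceph osd pool set foo size 2
ceph osd pool set-quota foo max_bytes 1073741824
ceph osd pool rename foo bar
ceph osd pool delete bar bar --yes-i-really-really-mean-it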
Invalid command: missing required parameter who() osd scrub <who> : initiate scrub on osd <who> Invalid command: unused arguments: ['toomany'] osd scrub <who> : initiate scrub on osd <who> .Invalid command: missing required parameter key(pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent) osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : set <key> Invalid command: invalid not in pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : set <key> Invalid command: unused arguments: ['toomany'] osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : set <key> Invalid command: missing required parameter key(pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent) osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : unset <key> Invalid command: invalid not in pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : unset <key> Invalid command: unused arguments: ['toomany'] osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent : unset <key> .Invalid command: unused arguments: ['toomany'] osd setcrushmap : set crush map from input file .Invalid command: missing required parameter newmax() osd setmaxosd <int[0-]> : set new maximum osd value Invalid command: -1 not in range [0L] osd setmaxosd <int[0-]> : set new maximum osd value Invalid command: unused arguments: ['1'] osd setmaxosd <int[0-]> : set new maximum osd value .Invalid command: unused arguments: ['toomany'] osd stat : print summary of OSD map .Invalid command: missing required parameter num_epochs() osd thrash <int[0-]> : thrash OSDs for <num_epochs> Invalid command: -1 not in range [0L] osd thrash <int[0-]> : thrash OSDs for <num_epochs> Invalid command: unused arguments: ['1'] osd thrash <int[0-]> : thrash OSDs for <num_epochs> .Invalid command: missing required parameter pool() osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward : specify the caching mode for cache tier <pool> Invalid command: missing required parameter mode(none|writeback|forward|readonly|readforward) osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward : specify the caching mode for cache tier <pool> .Invalid command: missing required parameter pool() osd tier add <poolname> <poolname> {--force-nonempty} : add the tier <tierpool> (the second one) to base pool <pool> (the first one) Invalid command: missing required parameter tierpool() osd tier add <poolname> <poolname> {--force-nonempty} : add the tier <tierpool> (the second one) to base pool <pool> (the first one) toomany not valid: toomany not in --force-nonempty Invalid command: unused arguments: ['toomany'] osd tier add <poolname> <poolname> {--force-nonempty} : add the tier <tierpool> (the second one) to base pool <pool> (the first one) Invalid command: missing required parameter pool() osd tier remove <poolname> <poolname> : remove the tier <tierpool> (the second one) from base pool <pool> (the first one) Invalid command: missing required parameter tierpool() osd tier remove <poolname> <poolname> : remove the tier <tierpool> (the second one) from base pool <pool> (the first one) Invalid command: unused arguments: ['toomany'] osd tier remove <poolname> <poolname> : remove the tier <tierpool> (the second one) from base pool <pool> (the first one) Invalid command: missing required parameter pool() osd tier set-overlay <poolname> <poolname> : set the overlay pool for base pool <pool> to be <overlaypool> Invalid command: missing required parameter overlaypool() osd tier set-overlay <poolname> <poolname> : set the overlay pool for base pool <pool> to be <overlaypool> Invalid command: unused arguments: ['toomany'] osd tier set-overlay <poolname> <poolname> : set the overlay pool for base pool <pool> to be <overlaypool> .Invalid command: missing required parameter pool() osd tier remove-overlay <poolname> : remove the overlay pool for base pool <pool> Invalid command: unused arguments: ['toomany'] osd tier remove-overlay <poolname> : remove the overlay pool for base pool <pool> .Invalid command: unused arguments: ['toomany'] osd unpause : unpause osd .Invalid command: missing required parameter debugop(unfound_objects_exist|degraded_pgs_exist) pg debug unfound_objects_exist|degraded_pgs_exist : show debug info about pgs Invalid command: invalid not in unfound_objects_exist|degraded_pgs_exist pg debug unfound_objects_exist|degraded_pgs_exist : show debug info about pgs .Invalid command: missing required parameter pgid() pg deep-scrub <pgid> : start deep-scrub on <pgid> Invalid command: pgid has no . pg deep-scrub <pgid> : start deep-scrub on <pgid> .invalid not valid: invalid not in all|summary|sum|delta|pools|osds|pgs|pgs_brief Invalid command: unused arguments: ['invalid'] pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]} : show human-readable versions of pg map (only 'all' valid with plain) .invalid not valid: invalid not in all|summary|sum|pools|osds|pgs Invalid command: unused arguments: ['invalid'] pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]} : show human-readable version of pg map in json only ..invalid not valid: invalid not in inactive|unclean|stale invalid not valid: invalid doesn't represent an int Invalid command: unused arguments: ['invalid'] pg dump_stuck {inactive|unclean|stale [inactive|unclean|stale...]} {<int>} : show information about stuck pgs 1234 not valid: 1234 not in inactive|unclean|stale .Invalid command: missing required parameter pgid() pg force_create_pg <pgid> : force creation of pg <pgid> Invalid command: pgid has no . pg force_create_pg <pgid> : force creation of pg <pgid> ..Invalid command: missing required parameter pgid() pg map <pgid> : show mapping of pg to osds Invalid command: pgid has no . pg map <pgid> : show mapping of pg to osds .Invalid command: missing required parameter pgid() pg repair <pgid> : start repair on <pgid> Invalid command: pgid has no . pg repair <pgid> : start repair on <pgid> .Invalid command: missing required parameter pgid() pg scrub <pgid> : start scrub on <pgid> Invalid command: pgid has no . pg scrub <pgid> : start scrub on <pgid> ..Invalid command: missing required parameter ratio() pg set_full_ratio <float[0.0-1.0]> : set ratio at which pgs are considered full Invalid command: 2.0 not in range [0.0, 1.0] pg set_full_ratio <float[0.0-1.0]> : set ratio at which pgs are considered full .Invalid command: missing required parameter ratio() pg set_nearfull_ratio <float[0.0-1.0]> : set ratio at which pgs are considered nearly full Invalid command: 2.0 not in range [0.0, 1.0] pg set_nearfull_ratio <float[0.0-1.0]> : set ratio at which pgs are considered nearly full ...
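A pgid has the form <poolnum>.<hex>, which is what the 'pgid has no .' check enforces, and the ratio commands take a float in [0.0, 1.0]. Accepted forms (illustrative pgid and values):

ceph pg map 0.0
ceph pg scrub 0.0
ceph pg set_full_ratio 0.95
ceph pg set_nearfull_ratio 0.85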
---------------------------------------------------------------------- Ran 137 tests in 24.439s OK PASS: test/pybind/test_ceph_argparse.py =========================================== 1 of 83 tests failed Please report to ceph-devel@vger.kernel.org =========================================== make[4]: *** [check-TESTS] Error 1 make[4]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make[3]: *** [check-am] Error 2 make[3]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make[2]: *** [check-recursive] Error 1 make[2]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make[1]: *** [check] Error 2 make[1]: Leaving directory `/srv/autobuild-ceph/gitbuilder.git/build/src' make: *** [check-recursive] Error 1 + exit 5 >>> Result code: 40 FAIL `out/log' -> `out/fail/e27cf4139fbe895ef4d1817365275e6a50d603d8' Done: e27cf4139fbe895ef4d1817365275e6a50d603d8
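To chase a failure like this outside the autobuilder, the usual approach is to bring up a throwaway cluster from the built tree and re-run the cephtool workunit by hand (conventional vstart invocation, not taken from this log):

cd src
MON=3 OSD=3 MDS=1 ./vstart.sh -n -x -l   # new local cluster on localhost
PATH=$PWD:$PATH ../qa/workunits/cephtool/test.sh
./stop.sh                                # tear the cluster down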