Bug #15316
Status: Closed
Can't locate Amazon/S3.pm in @INC (you may need to install the Amazon::S3 module)
Description
2016-03-29T09:43:05.889 DEBUG:teuthology.run:Config:
archive_path: /home/teuthworker/archive/teuthology-2016-03-28_19:54:09-rgw:multifs-v0.94.6---basic-plana/286
branch: v0.94.6
description: rgw:multifs/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec.yaml tasks/rgw_multipart_upload.yaml}
email: null
job_id: '286'
last_in_suite: false
machine_type: plana
name: teuthology-2016-03-28_19:54:09-rgw:multifs-v0.94.6---basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: v0.94.6
  ceph:
    conf:
      client:
        debug rgw: 20
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd sloppy crc: true
    fs: xfs
    log-whitelist:
    - slow request
    sha1: e832001feaf8c176593e0325c8298e3f16dfb403
  ceph-deploy:
    branch:
      dev-commit: e832001feaf8c176593e0325c8298e3f16dfb403
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: e832001feaf8c176593e0325c8298e3f16dfb403
  rgw:
    ec-data-pool: true
    frontend: apache
  s3tests:
    slow_backend: true
  workunit:
    sha1: e832001feaf8c176593e0325c8298e3f16dfb403
owner: scheduled_teuthology@scheduler
priority: 1000
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - client.0
- - mon.b
  - osd.3
  - osd.4
  - osd.5
  - client.1
sha1: e832001feaf8c176593e0325c8298e3f16dfb403
suite: rgw:multifs
suite_branch: hammer
suite_path: /home/foo2/src/ceph-qa-suite_hammer
suite_sha1: 789be166faa196901c92745fe626b282c0ece6a7
tasks:
- install: null
- ceph: null
- rgw:
  - client.0
- workunit:
    clients:
      client.0:
      - rgw/s3_multipart_upload.pl
teuthology_branch: master
tube: plana
verbose: false
worker_log: /home/teuthworker/archive/worker_logs/worker.plana.10905
......
2016-03-29T09:55:36.020 INFO:teuthology.run_tasks:Running task rgw...
2016-03-29T09:55:36.022 INFO:tasks.rgw:Using apache as radosgw frontend
2016-03-29T09:55:36.023 INFO:tasks.rgw:Creating apache directories...
2016-03-29T09:55:36.023 INFO:teuthology.orchestra.run.plana36:Running: 'mkdir -p /home/ubuntu/cephtest/apache/htdocs.client.0 /home/ubuntu/cephtest/apache/tmp.client.0/fastcgi_sock && mkdir /home/ubuntu/cephtest/archive/apache.client.0'
2016-03-29T09:55:36.036 DEBUG:tasks.rgw:In rgw.configure_regions_and_zones() and regions is None. Bailing
2016-03-29T09:55:36.036 INFO:tasks.rgw:Configuring users...
2016-03-29T09:55:36.037 INFO:tasks.rgw:creating data pools
2016-03-29T09:55:36.037 INFO:teuthology.orchestra.run.plana36:Running: 'ceph osd erasure-code-profile set client.0 k=2 m=1 ruleset-failure-domain=osd'
2016-03-29T09:55:37.057 INFO:teuthology.orchestra.run.plana36:Running: 'ceph osd pool create .rgw.buckets 64 64 erasure client.0'
2016-03-29T09:55:39.551 INFO:teuthology.orchestra.run.plana36.stderr:pool '.rgw.buckets' created
2016-03-29T09:55:39.562 INFO:tasks.rgw:Shipping apache config and rgw.fcgi...
2016-03-29T09:55:39.563 INFO:teuthology.orchestra.run.plana36:Running: 'sudo lsb_release -is'
2016-03-29T09:55:39.630 DEBUG:teuthology.misc:System to be installed: Ubuntu
2016-03-29T09:55:39.631 INFO:teuthology.orchestra.run.plana36:Running: 'python -c \'import shutil, sys; shutil.copyfileobj(sys.stdin, file(sys.argv[1], "wb"))\' /home/ubuntu/cephtest/apache/apache.client.0.conf'
2016-03-29T09:55:39.661 INFO:teuthology.orchestra.run.plana36:Running: 'python -c \'import shutil, sys; shutil.copyfileobj(sys.stdin, file(sys.argv[1], "wb"))\' /home/ubuntu/cephtest/apache/htdocs.client.0/rgw.fcgi'
2016-03-29T09:55:39.795 INFO:teuthology.orchestra.run.plana36:Running: 'chmod a=rx /home/ubuntu/cephtest/apache/htdocs.client.0/rgw.fcgi'
2016-03-29T09:55:39.853 INFO:tasks.rgw:Starting rgw...
2016-03-29T09:55:39.854 INFO:tasks.rgw:rgw client.0 config is {}
2016-03-29T09:55:39.855 INFO:tasks.rgw:client client.0 is id 0
2016-03-29T09:55:39.855 INFO:tasks.rgw.client.0:Restarting daemon
2016-03-29T09:55:39.856 INFO:teuthology.orchestra.run.plana36:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-socket-path /home/ubuntu/cephtest/apache/tmp.client.0/fastcgi_sock/rgw_sock -n client.0 -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.client.0.sock --foreground | sudo tee /var/log/ceph/rgw.client.0.stdout 2>&1'
2016-03-29T09:55:39.945 INFO:tasks.rgw.client.0:Started
2016-03-29T09:55:39.945 INFO:tasks.rgw:Starting apache...
2016-03-29T09:55:39.946 INFO:teuthology.orchestra.run.plana36:Running: 'sudo lsb_release -is'
2016-03-29T09:55:40.022 DEBUG:teuthology.misc:System to be installed: Ubuntu
2016-03-29T09:55:40.022 INFO:teuthology.orchestra.run.plana36:Running: 'adjust-ulimits daemon-helper kill apache2 -X -f /home/ubuntu/cephtest/apache/apache.client.0.conf'
2016-03-29T09:55:40.033 INFO:teuthology.run_tasks:Running task workunit...
2016-03-29T09:55:40.034 INFO:tasks.workunit:Pulling workunits from ref e832001feaf8c176593e0325c8298e3f16dfb403
2016-03-29T09:55:40.034 INFO:tasks.workunit:Making a separate scratch dir for every client...
2016-03-29T09:55:40.035 DEBUG:tasks.workunit:getting remote for 0 role client.0
2016-03-29T09:55:40.035 INFO:teuthology.orchestra.run.plana36:Running: 'stat -- /home/ubuntu/cephtest/mnt.0'
2016-03-29T09:55:40.047 INFO:teuthology.orchestra.run.plana36.stderr:stat: cannot stat ‘/home/ubuntu/cephtest/mnt.0’: No such file or directory
2016-03-29T09:55:40.048 INFO:teuthology.orchestra.run.plana36:Running: 'mkdir -- /home/ubuntu/cephtest/mnt.0'
2016-03-29T09:55:40.139 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0
2016-03-29T09:55:40.140 INFO:teuthology.orchestra.run.plana36:Running: 'cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0'
2016-03-29T09:55:41.260 INFO:teuthology.orchestra.run.plana36:Running: "cd -- /home/ubuntu/cephtest/workunit.client.0 && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\\0' >/home/ubuntu/cephtest/workunits.list.client.0"
2016-03-29T09:55:41.469 INFO:tasks.workunit.client.0.plana36.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2016-03-29T09:55:41.471 INFO:tasks.workunit.client.0.plana36.stdout:make[1]: Entering directory `/home/ubuntu/cephtest/workunit.client.0/direct_io'
2016-03-29T09:55:41.472 INFO:tasks.workunit.client.0.plana36.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2016-03-29T09:55:48.481 INFO:tasks.workunit.client.0.plana36.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2016-03-29T09:55:48.958 INFO:tasks.workunit.client.0.plana36.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
2016-03-29T09:55:49.025 INFO:tasks.workunit.client.0.plana36.stdout:make[1]: Leaving directory `/home/ubuntu/cephtest/workunit.client.0/direct_io'
2016-03-29T09:55:49.027 INFO:tasks.workunit.client.0.plana36.stdout:make[1]: Entering directory `/home/ubuntu/cephtest/workunit.client.0/fs'
2016-03-29T09:55:49.027 INFO:tasks.workunit.client.0.plana36.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2016-03-29T09:55:49.080 INFO:tasks.workunit.client.0.plana36.stdout:make[1]: Leaving directory `/home/ubuntu/cephtest/workunit.client.0/fs'
2016-03-29T09:55:49.180 INFO:tasks.workunit:Running workunits matching rgw/s3_multipart_upload.pl on client.0...
2016-03-29T09:55:49.181 INFO:tasks.workunit:Running workunit rgw/s3_multipart_upload.pl...
2016-03-29T09:55:49.182 INFO:teuthology.orchestra.run.plana36:Running (workunit test rgw/s3_multipart_upload.pl): 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e832001feaf8c176593e0325c8298e3f16dfb403 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl'
2016-03-29T09:55:49.276 INFO:tasks.workunit.client.0.plana36.stderr:Can't locate Amazon/S3.pm in @INC (you may need to install the Amazon::S3 module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl line 30.
2016-03-29T09:55:49.276 INFO:tasks.workunit.client.0.plana36.stderr:BEGIN failed--compilation aborted at /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl line 30.
2016-03-29T09:55:49.277 INFO:tasks.workunit:Stopping ['rgw/s3_multipart_upload.pl'] on client.0...
2016-03-29T09:55:49.277 INFO:teuthology.orchestra.run.plana36:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone'
2016-03-29T09:55:49.307 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/foo2/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/foo2/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/foo2/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/foo2/src/ceph-qa-suite_hammer/tasks/workunit.py", line 385, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/foo2/src/teuthology_master/teuthology/orchestra/remote.py", line 156, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/foo2/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/foo2/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed (workunit test rgw/s3_multipart_upload.pl) on plana36 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e832001feaf8c176593e0325c8298e3f16dfb403 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl'
2016-03-29T09:55:49.309 ERROR:teuthology.run_tasks:Saw exception from tasks.
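The "Can't locate Amazon/S3.pm in @INC" failure above is Perl's module search coming up empty: `use Amazon::S3` translates the module name to the relative path `Amazon/S3.pm` and probes each `@INC` directory in order. The following is a toy shell sketch of that lookup (not the real Perl or teuthology code), showing how an empty or wrong search path reproduces the error and how dropping the `.pm` file into a searched directory fixes it:

```shell
# find_module MODULE "DIR1 DIR2 ...": mimic Perl's @INC search.
# Amazon::S3 -> Amazon/S3.pm, probed against each directory in order.
find_module() {
    relpath="$(printf '%s' "$1" | sed 's|::|/|g').pm"
    for dir in $2; do
        if [ -f "$dir/$relpath" ]; then
            printf '%s/%s\n' "$dir" "$relpath"
            return 0
        fi
    done
    printf "Can't locate %s in @INC\n" "$relpath" >&2
    return 1
}

# Demo: a path without the module reproduces the bug; installing the
# .pm file under a searched directory resolves it.
inc_dir="$(mktemp -d)"
find_module Amazon::S3 "$inc_dir" || echo "module missing, as in the bug"
mkdir -p "$inc_dir/Amazon" && : > "$inc_dir/Amazon/S3.pm"
find_module Amazon::S3 "$inc_dir"
rm -rf "$inc_dir"
```

This is why the later comments focus on *where* cpan installed the module: if it lands outside the directories listed in the error message, Perl behaves exactly as if it were never installed.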
Updated by Loïc Dachary about 8 years ago
- Description updated (diff)
- Status changed from New to 12
Updated by Loïc Dachary about 8 years ago
http://pulpito.ceph.com/loic-2016-04-05_04:41:12-rgw-hammer-backports---basic-smithi/108945/
http://pulpito.ceph.com/loic-2016-04-05_04:41:12-rgw-hammer-backports---basic-smithi/108953/
http://pulpito.ceph.com/loic-2016-04-05_04:41:12-rgw-hammer-backports---basic-smithi/108955/
2016-04-05T16:57:50.263 INFO:tasks.workunit.client.0.smithi019.stdout:make[1]: Leaving directory `/home/ubuntu/cephtest/workunit.client.0/fs'
2016-04-05T16:57:50.336 INFO:tasks.workunit:Running workunits matching rgw/s3_multipart_upload.pl on client.0...
2016-04-05T16:57:50.337 INFO:tasks.workunit:Running workunit rgw/s3_multipart_upload.pl...
2016-04-05T16:57:50.337 INFO:teuthology.orchestra.run.smithi019:Running (workunit test rgw/s3_multipart_upload.pl): 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0eb9d5598ac1dfbca0e679722f2e86f9270d2bc4 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl'
2016-04-05T16:57:50.487 INFO:tasks.workunit.client.0.smithi019.stderr:Can't locate Amazon/S3.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl line 30.
2016-04-05T16:57:50.487 INFO:tasks.workunit.client.0.smithi019.stderr:BEGIN failed--compilation aborted at /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl line 30.
2016-04-05T16:57:50.488 INFO:tasks.workunit:Stopping ['rgw/s3_multipart_upload.pl'] on client.0...
2016-04-05T16:57:50.488 INFO:teuthology.orchestra.run.smithi019:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone'
Updated by Loïc Dachary about 8 years ago
- Subject changed from (rgw:frontend: apache) teuthology wasn't judged apache2 whether to install to Can't locate Amazon/S3.pm in @INC (you may need to install the Amazon::S3 module)
Updated by Loïc Dachary about 8 years ago
- Description updated (diff)
Removing the non-relevant "failure to extract the workunit" part from the description.
Updated by Vasu Kulkarni about 8 years ago
Amazon::S3 is installed via ansible. I remember some time back there was an issue with CentOS where it didn't install into the standard folders and Perl couldn't locate the library; this sounds similar.
Updated by Vasu Kulkarni about 8 years ago
From Loic's logs, I see it installed under Perl 5.18.2:
2016-04-06T14:42:01.793 INFO:teuthology.task.ansible.out: TASK: [testnode | Check to see if Amazon::S3 is installed.] *******************
2016-04-06T14:42:01.830 INFO:teuthology.task.ansible.out:skipping: [smithi019.front.sepia.ceph.com]
2016-04-06T14:42:01.832 INFO:teuthology.task.ansible.out:
2016-04-06T14:42:01.957 INFO:teuthology.task.ansible.out:ok: [smithi048.front.sepia.ceph.com] => {"changed": false, "cmd": ["perldoc", "-l", "Amazon::S3"], "delta": "0:00:00.041421", "end": "2016-04-06 21:42:01.948193", "rc": 0, "start": "2016-04-06 21:42:01.906772", "stderr": "", "stdout": "/usr/local/share/perl/5.18.2/Amazon/S3.pm", "stdout_lines": ["/usr/local/share/perl/5.18.2/Amazon/S3.pm"], "warnings": []}
2016-04-06T14:42:01.966 INFO:teuthology.task.ansible.out: TASK: [testnode | Install Amazon::S3.] ****************************************
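The playbook output above shows a check-then-install pattern: the "Check to see if Amazon::S3 is installed" task runs `perldoc -l Amazon::S3`, and the install task only fires when that probe fails. A shell paraphrase of that pattern (my reading of the output, not the actual ceph-cm-ansible task), with the probe parameterized so the sketch runs anywhere:

```shell
# module_state PROBE_CMD: report whether a module probe succeeds.
# On a real testnode PROBE_CMD would be e.g. "perldoc -l Amazon::S3";
# a failing probe is what lets the install task run.
module_state() {
    if sh -c "$1" >/dev/null 2>&1; then
        echo present
    else
        echo absent
    fi
}

# Demo with stand-in probes:
module_state "true"    # succeeding probe -> present (install skipped)
module_state "false"   # failing probe    -> absent  (install would run)
```

Note that this pattern can go wrong in exactly the way this bug shows: if the *install* step silently stalls or installs into a non-`@INC` path, the check keeps reporting "absent" (or the workunit keeps failing) even though the playbook "ran".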
Updated by Zack Cerza almost 8 years ago
On smithi019:
[ubuntu@smithi019 ~]$ sudo cpan Amazon::S3
Sorry, we have to rerun the configuration dialog for CPAN.pm due to some missing parameters.
Configuration will be written to <</root/.cpan/CPAN/MyConfig.pm>>
CPAN.pm requires configuration, but most of it can be done automatically. If you answer 'no' below, you will enter an interactive dialog for each configuration option instead.
Would you like to configure as much as possible automatically? [yes]
Looks like the ansible task we have to install Amazon::S3 isn't handling this interactive prompt.
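One possible workaround (an assumption on my part, not the fix that was eventually merged): CPAN.pm honors the `PERL_MM_USE_DEFAULT` environment variable and takes the default answer at every prompt, including the "configure as much as possible automatically?" dialog shown above, and `NONINTERACTIVE_TESTING` quiets interactive distribution test suites. The helper below only builds the command string, so the sketch stays side-effect free:

```shell
# Build (but do not run) a non-interactive cpan install command.
# PERL_MM_USE_DEFAULT=1 makes CPAN.pm accept the default at every prompt.
noninteractive_cpan_cmd() {
    printf 'sudo env PERL_MM_USE_DEFAULT=1 NONINTERACTIVE_TESTING=1 cpan %s\n' "$1"
}

noninteractive_cpan_cmd Amazon::S3
# -> sudo env PERL_MM_USE_DEFAULT=1 NONINTERACTIVE_TESTING=1 cpan Amazon::S3
```

Running that command on a testnode should avoid the hang, though pre-seeding a MyConfig.pm (as done later in this ticket) is the more deterministic fix.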
Updated by Nathan Cutler almost 8 years ago
Hit this in hammer integration testing.
Updated by Nathan Cutler almost 8 years ago
For example: http://pulpito.ceph.com/smithfarm-2016-06-02_00:42:57-rgw-hammer-backports---basic-smithi/229991/
This is causing quite a few failures; it would be nice if it could be fixed.
Updated by Nathan Cutler almost 8 years ago
- Priority changed from Normal to Urgent
Upgrading priority of bug because of the trouble it is causing in hammer integration testing.
Updated by Nathan Cutler almost 8 years ago
So, I looked into this a little.
As far as I can tell, there is no cpan configuration on the smithis. At least, on smithi018 there is none at all (the /root/.cpan directory does not exist). On smithi019 the directory was present, but the file "/root/.cpan/CPAN/MyConfig.pm" contained no configuration, just a single line that said "1;".
When I ran the configuration dialog, it populated /root/.cpan. After that, "sudo cpan Amazon::S3" runs (and completes) as expected, without any need for user input.
As far as I can tell, the only thing needed is the file /root/.cpan/CPAN/MyConfig.pm with the following contents:
$CPAN::Config = {
  'applypatch' => q[],
  'auto_commit' => q[0],
  'build_cache' => q[100],
  'build_dir' => q[/root/.cpan/build],
  'build_dir_reuse' => q[0],
  'build_requires_install_policy' => q[yes],
  'bzip2' => q[/bin/bzip2],
  'cache_metadata' => q[1],
  'check_sigs' => q[0],
  'colorize_output' => q[0],
  'commandnumber_in_prompt' => q[1],
  'connect_to_internet_ok' => q[1],
  'cpan_home' => q[/root/.cpan],
  'ftp_passive' => q[1],
  'ftp_proxy' => q[],
  'getcwd' => q[cwd],
  'gpg' => q[/bin/gpg],
  'gzip' => q[/bin/gzip],
  'halt_on_failure' => q[0],
  'histfile' => q[/root/.cpan/histfile],
  'histsize' => q[100],
  'http_proxy' => q[],
  'inactivity_timeout' => q[0],
  'index_expire' => q[1],
  'inhibit_startup_message' => q[0],
  'keep_source_where' => q[/root/.cpan/sources],
  'load_module_verbosity' => q[none],
  'make' => q[/bin/make],
  'make_arg' => q[],
  'make_install_arg' => q[],
  'make_install_make_command' => q[/bin/make],
  'makepl_arg' => q[],
  'mbuild_arg' => q[],
  'mbuild_install_arg' => q[],
  'mbuild_install_build_command' => q[./Build],
  'mbuildpl_arg' => q[],
  'no_proxy' => q[],
  'pager' => q[/bin/less],
  'patch' => q[/bin/patch],
  'perl5lib_verbosity' => q[none],
  'prefer_external_tar' => q[1],
  'prefer_installer' => q[MB],
  'prefs_dir' => q[/root/.cpan/prefs],
  'prerequisites_policy' => q[follow],
  'scan_cache' => q[atstart],
  'shell' => q[/bin/bash],
  'show_unparsable_versions' => q[0],
  'show_upload_date' => q[0],
  'show_zero_versions' => q[0],
  'tar' => q[/bin/tar],
  'tar_verbosity' => q[none],
  'term_is_latin' => q[1],
  'term_ornaments' => q[1],
  'test_report' => q[0],
  'trust_test_report_history' => q[0],
  'unzip' => q[/bin/unzip],
  'urllist' => [q[http://cpan.erlbaum.net/], q[http://cpan.mirror.nac.net/], q[http://mirrors.ccs.neu.edu/CPAN/]],
  'use_sqlite' => q[0],
  'version_timeout' => q[15],
  'wget' => q[/bin/wget],
  'yaml_load_code' => q[0],
  'yaml_module' => q[YAML],
};
1;
__END__
and the following lines appended to /root/.bashrc:
PATH="/root/perl5/bin${PATH:+:${PATH}}"; export PATH;
PERL5LIB="/root/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"; export PERL5LIB;
PERL_LOCAL_LIB_ROOT="/root/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}"; export PERL_LOCAL_LIB_ROOT;
PERL_MB_OPT="--install_base \"/root/perl5\""; export PERL_MB_OPT;
PERL_MM_OPT="INSTALL_BASE=/root/perl5"; export PERL_MM_OPT;
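The fix Nathan describes can be scripted: write a MyConfig.pm non-interactively instead of running the CPAN dialog by hand on each node. The sketch below is illustrative only: the demo path and the trimmed-down key set are my own choices, and the full dump above is what would actually need to land on the nodes.

```shell
# Materialize a minimal CPAN MyConfig.pm without the interactive dialog.
# The path is a throwaway demo location, not /root/.cpan.
cpan_home="${CPAN_DEMO_HOME:-$(mktemp -d)}"
mkdir -p "$cpan_home/CPAN"

# Unquoted heredoc delimiter: $cpan_home expands, \$CPAN::Config stays literal.
cat > "$cpan_home/CPAN/MyConfig.pm" <<EOF
\$CPAN::Config = {
  'cpan_home'              => q[$cpan_home],
  'build_dir'              => q[$cpan_home/build],
  'keep_source_where'      => q[$cpan_home/sources],
  'histfile'               => q[$cpan_home/histfile],
  'prerequisites_policy'   => q[follow],
  'connect_to_internet_ok' => q[1],
};
1;
__END__
EOF

# A populated config is what lets "cpan Amazon::S3" run without prompting.
grep -q 'CPAN::Config' "$cpan_home/CPAN/MyConfig.pm" && echo "MyConfig.pm written"
```

Dropping something like this (with the real /root/.cpan paths) into the ansible role would make the nodes reproducible instead of depending on whoever last ran the dialog by hand.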
Updated by Nathan Cutler almost 8 years ago
- Project changed from teuthology to sepia
Moving to sepia tracker as it seems to be an infrastructure issue.
Updated by David Galloway almost 8 years ago
- Category set to Infrastructure Service
- Assignee changed from Tamilarasi muthamizhan to David Galloway
Updated by David Galloway almost 8 years ago
- Status changed from 12 to Fix Under Review
CPAN (and thus Amazon::S3) wasn't configured on CentOS testnodes. Added functionality in https://github.com/ceph/ceph-cm-ansible/pull/256
Updated by David Galloway almost 8 years ago
This should be resolved by https://github.com/ceph/ceph-cm-ansible/pull/256
Updated by Anonymous almost 8 years ago
This test worked on CentOS 7.0. See ~wusui/s3test-amazon-pm on teuthology.front.sepia.ceph.com.
Updated by David Galloway almost 8 years ago
- Status changed from Fix Under Review to Resolved