Tasks #11644
Status: Closed
firefly v0.80.11
Description
Workflow¶
- Preparing the release IN PROGRESS
- Cutting the release
- Nathan asks Sage if a point release should be published OK
- Nathan gets approval from all leads
- Sage writes and commits the release notes IN PROGRESS
- Nathan informs Yuri that the branch is ready for testing DONE
- Yuri runs additional integration tests DONE
- If Yuri discovers new bugs with severity Critical, the release goes back to being prepared: it was not ready after all
- Yuri informs Alfredo that the branch is ready for release DONE
- Alfredo creates the packages and sets the release tag IN PROGRESS
Release information¶
- branch to build from: firefly, commit:c551622ca21fe044bc1083614c45d888a2a34aeb
- version: v0.80.11
- type of release: point release
- where to publish the release: http://ceph.com/debian-firefly and http://ceph.com/rpm-firefly
git --no-pager log --format='%H %s' --graph tags/v0.80.10..ceph/firefly | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
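The one-liner above rewrites each `git log --format='%H %s' --graph` line into a Textile link (pull-request links for merge commits, commit links otherwise). A minimal Python sketch of the same rewrite, for reference; the hashes in the examples are illustrative, not from this release:

```python
import re

def textile_link(line: str) -> str:
    """Mirror the perl one-liner: rewrite one line of
    `git log --format='%H %s' --graph` output into a Textile link."""
    line = line.replace('"', ' ')
    if re.search(r'\w+\s+Merge pull request #\d+', line):
        # Merge commit: link to the pull request
        line = re.sub(r'\w+\s+Merge pull request #(\d+).*',
                      r'"Pull request \1":https://github.com/ceph/ceph/pull/\1',
                      line)
    else:
        # Ordinary commit: link the subject to the commit hash
        line = re.sub(r'(\w+)\s+(.*)',
                      r'"\2":https://github.com/ceph/ceph/commit/\1',
                      line)
    # s/\*/+/ then s/^/* / in the original one-liner
    return '* ' + line.replace('*', '+', 1)
```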
teuthology run commit:d135e397fa182cebde4d96f52c8eb3075e51b74c (firefly-backports august 2015)¶
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 4535
- |\
- | + rgw: use correct objv_tracker for bucket instance
- + Pull request 4765
- |\
- | + rgw: always check if token is expired
- + Pull request 4769
- |\
- | + mon: prevent bucket deletion when referenced by a rule
- + Pull request 4788
- |\
- | + mon/OSDMonitor: require mon_allow_pool_delete = true to remove pools
- + Pull request 4854
- |\
- | + tests: verify librbd blocking aio code path
- | + librbd: new rbd_non_blocking_aio config option
- | + PendingReleaseNotes: document changes to librbd's aio_read methods
- | + tests: update librbd AIO tests to remove result code
- | + librbd: AioRequest::send no longer returns a result
- | + librbd: internal AIO methods no longer return result
- | + Throttle: added pending_error method to SimpleThrottle
- | + librbd: add new fail method to AioCompletion
- | + librbd: avoid blocking AIO API methods
- | + librbd: add task pool / work queue for AIO requests
- | + WorkQueue: added virtual destructor
- | + WorkQueue: add new ContextWQ work queue
- | + common/ceph_context: don't import std namespace
- | + CephContext: Add AssociatedSingletonObject to allow CephContext's singleton
- + Pull request 4861
- |\
- | + rgw: Do not enclose the Bucket header in quotes
- + Pull request 5043
- |\
- | + For pgls OP, get/put budget on per list session basis, instead of per OP basis, which could lead to deadlock.
- + Pull request 5056
- |\
- | + rgw/logrotate.conf: Rename service name
- + Pull request 5199
- |\
- | + mon: always reply mdsbeacon
- | + mon/MDSMonitor: rename labels to a better name
- | + mon: send no_reply() to peon to drop ignored mdsbeacon
- | + mon: remove unnecessary error handling
- + Pull request 5200
- |\
- | + ReplicatedPG::maybe_handle_cache: do not skip promote for write_ordered
- | + osd: promotion on 2nd read for cache tiering
- | + ceph_test_rados_api_tier: test promote-on-second-read behavior
- | + osd/osd_types: be pedantic about encoding last_force_op_resend without feature bit
- + Pull request 5217
- |\
- | + ceph.spec.in: python-argparse only in Python 2.6
- + Pull request 5224
- |\
- | + ceph.spec.in: do not run fdupes, even on SLE/openSUSE
- + Pull request 5225
- |\
- | + ceph.spec.in: use _udevrulesdir to eliminate conditionals
- + Pull request 5232
- |\
- | + rgw: simplify content length handling
- | + rgw: make compatability deconfliction optional.
- | + rgw: improve content-length env var handling
- + Pull request 5233
- |\
- | + Unconditionally chown rados log file.
- + Pull request 5234
- |\
- | + rgw: error out if frontend did not send all data
- + Pull request 5235
- |\
- | + os/chain_xattr: handle read on chnk-aligned xattr
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case
- | + TestPGLog: fix noop log proc_replica_log test case
- | + TestPGLog: add test for 11358
- | + PGLog::proc_replica_log: handle split out overlapping entries
- + Pull request 5358
- |\
- | + mon: PaxosService: call post_refresh() instead of post_paxos_update()
- + Pull request 5360
- |\
- | + mon: MonitorDBStore: get_next_key() only if prefix matches
- + Pull request 5388
- |\
- | + UnittestBuffer: Add bufferlist zero test case
- | + buffer: Fix bufferlist::zero bug with special case
- + Pull request 5389
- |\
- | + OSDMonitor: disallow ec pools as tiers
- + Pull request 5390
- |\
- | + rgw/logrotate.conf: Rename service name
- + Pull request 5394
- |\
- | + ceph.spec.in: drop SUSE-specific %py_requires macro
- + Pull request 5403
- |\
- | + Mutex: fix leak of pthread_mutexattr
- + Pull request 5404
- |\
- | + mon/PGMonitor: bug fix pg monitor get crush rule
- + Pull request 5406
- |\
- | + Log::reopen_log_file: take m_flush_mutex
- + Pull request 5408
- |\
- | + mon: OSDMonitor: fix hex output on 'osd reweight'
- + Pull request 5409
- |\
- | + mon/PGMonitor: avoid uint64_t overflow when checking pool 'target/max' status. Fixes: #12401
- + Pull request 5410
- + Update OSDMonitor.cc
rados¶
./virtualenv/bin/teuthology-suite --priority 1000 --subset 5/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/smithfarm-2015-08-10_12:21:46-rados-firefly-backports---basic-multi/
- environmental noise (Connection failure: timed out) in two jobs: http://pulpito.ceph.com/smithfarm-2015-08-10_12:21:46-rados-firefly-backports---basic-multi/ and 6 more jobs dead
Re-running dead and failed jobs with:
./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
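The $filter value is assembled from the previous run's dead and failed job descriptions via the paddles API (the full jq pipeline appears in the July run notes further down in this ticket). A minimal Python sketch of the joining step, with made-up job data:

```python
import json

def build_filter(paddles_json: str) -> str:
    """Join the descriptions of dead/failed jobs into the comma-separated
    value passed to --filter, mirroring the jq | while read | sed pipeline."""
    jobs = json.loads(paddles_json)["jobs"]
    return ",".join(j["description"] for j in jobs
                    if j["status"] in ("dead", "fail"))

# Made-up fragment of a paddles /runs/<run>/ response:
sample = json.dumps({"jobs": [
    {"description": "rados/basic/{msgr-failures/few.yaml}", "status": "fail"},
    {"description": "rados/thrash/{clusters/fixed-2.yaml}", "status": "pass"},
    {"description": "rados/verify/{validater/valgrind.yaml}", "status": "dead"},
]})
```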
- red http://pulpito.ceph.com/smithfarm-2015-08-27_03:39:40-rados-firefly-backports---basic-multi/
- known bug http://tracker.ceph.com/issues/12870 'Failed to validate the SSL certificate for raw.githubusercontent.com:443. Use validate_certs=False (insecure) or make sure your managed systems have a valid CA certificate installed'
- known bug http://tracker.ceph.com/issues/12870 Error getting key from: https://raw.githubusercontent.com/ceph/keys/autogenerated/ssh/@all.pub
- plus 5 more dead -> according to jluis, these are caused by PR#5360.
Reset integration branch, omitting PR#5360. Re-running dead and failed jobs:
./virtualenv/bin/teuthology-suite --priority 101 --subset 5/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"
- red http://pulpito.ceph.com/smithfarm-2015-09-02_02:32:32-rados-firefly-backports---basic-multi/
- one job succeeded, killed the rest because I forgot to wait for gitbuilder
Re-run again (really omitting PR#5360 this time):
- red http://pulpito.ceph.com/smithfarm-2015-09-02_14:07:02-rados-firefly-backports---basic-multi/
- environmental problem The command was not found or was not executable: ansible-playbook. Reported the issue here: http://tracker.ceph.com/issues/12926
- just one test left; all others have passed
Re-run the last failed job on the assumption that missing ansible-playbook was "environmental noise":
rgw¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18
fs¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18
- red http://pulpito.ceph.com/smithfarm-2015-08-10_12:31:36-fs-firefly-backports-testing-basic-multi/
- environmental noise ConnectionLostError: SSH connection to plana04 was lost: http://pulpito.ceph.com/smithfarm-2015-08-10_12:31:36-fs-firefly-backports-testing-basic-multi/1008824/ - plus 7 dead jobs
Re-running dead and failed jobs with:
./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18 --filter="$filter"
- green http://pulpito.ceph.com/smithfarm-2015-08-27_03:43:47-fs-firefly-backports-testing-basic-multi/
rbd¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18
- red http://pulpito.ceph.com/smithfarm-2015-08-10_12:32:34-rbd-firefly-backports---basic-multi/
- 1 failed and 1 dead job, otherwise all passed
Re-running 1 failed and 1 dead job:
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18 --filter="$filter"
powercycle¶
./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz
- red http://pulpito.ceph.com/smithfarm-2015-08-10_12:35:46-powercycle-firefly-backports-testing-basic-multi/
- environmental problem hopefully fixed by https://github.com/ceph/ceph-cm-ansible/pull/109
Re-run with the same command:
- red http://pulpito.ceph.com/smithfarm-2015-08-31_04:51:54-powercycle-firefly-backports-testing-basic-multi/
- both dead
upgrade¶
./virtualenv/bin/teuthology-suite --machine-type plana,burnupi,mira --suite upgrade/firefly --filter=ubuntu_14.04 --suite-branch firefly --ceph firefly-backports
- red http://pulpito.ceph.com/smithfarm-2015-08-10_12:44:04-upgrade:firefly-firefly-backports---basic-multi/
- 10 dead, 11 passed
Re-running the 10 dead jobs with:
./virtualenv/bin/teuthology-suite --machine-type plana,burnupi,mira --suite upgrade/firefly --suite-branch firefly --ceph firefly-backports --filter="$filter"
Also running on CentOS 6 to verify the sanity of the rpm packages:
./virtualenv/bin/teuthology-suite --machine-type plana,burnupi,mira --suite upgrade/firefly --filter=centos_6 --suite-branch firefly --ceph firefly-backports --email ncutler@suse.cz
- red http://pulpito.ceph.com/loic-2015-09-02_07:36:47-upgrade:firefly-firefly-backports---basic-multi/
- all jobs dead
Re-running on CentOS with -m vps:
- red http://pulpito.ceph.com/smithfarm-2015-09-04_05:32:56-upgrade:firefly-firefly-backports---basic-vps/
- environmental noise "yum -y install ..." failures
Re-running the four failed jobs on CentOS with -m vps:
- green http://pulpito.ceph.com/smithfarm-2015-09-06_01:24:25-upgrade:firefly-firefly-backports---basic-vps/
ceph-deploy¶
./virtualenv/bin/teuthology-suite --machine-type vps --suite ceph-deploy --suite-branch firefly --ceph firefly-backports --email ncutler@suse.cz
teuthology run commit:b2aaddd3a06ac13c46df659e1f2b3119f5675802 (firefly-backports july 2015)¶
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 4535
- |\
- | + rgw: use correct objv_tracker for bucket instance
- + Pull request 4582
- |\
- | + ceph_argparse_flag has no regular 3rd parameter.
- + Pull request 4583
- |\
- | + Variable length array of std::strings (not legal in C++) changed to std::vector<std::string>
- + Pull request 4584
- |\
- | + rgw: swift GET / HEAD object returns X-Timestamp field
- + Pull request 4597
- |\
- | + osdc: add epoch_t last_force_resend in Op/LingerOp.
- + Pull request 4630
- |\
- | + ceph-disk: more robust parted output parser
- + Pull request 4631
- + Pull request 4632
- |\
- | + osd: refuse to write a new erasure coded object with an offset > 0
- + Pull request 4633
- + Pull request 4635
- |\
- | + json_sprit: fix the FTBFS on old gcc
- | + json_spirit: use utf8 intenally when parsing \uHHHH
- + Pull request 4636
- |\
- | + Fix disk zap sgdisk invocation
- + Pull request 4639
- |\
- | + librbd: updated cache max objects calculation
- + Pull request 4641
- |\
- | + rgw: remove meta file after deleting bucket The meta file is deleted only if the bucket meta data is not synced
- + Pull request 4642
- |\
- | + rgw: quota not respected in POST object
- + Pull request 4762
- |\
- | + rgw: Use attrs from source bucket on copy
- + Pull request 4765
- |\
- | + rgw: always check if token is expired
- + Pull request 4769
- |\
- | + mon: prevent bucket deletion when referenced by a rule
- + Pull request 4771
- |\
- | + ceph-disk: support NVMe device partitions
- + Pull request 4867
- |\
- | + Always provide summary for non-healthy cluster.
- + Pull request 5037
- |\
- | + Makefile: install ceph-post-file keys with mode 600
- | + ceph-post-file: improve check for a source install
- | + ceph-post-file: behave when sftp doesn't take -i
- + Pull request 5039
- |\
- | + osd: Cleanup boost optionals
- + Pull request 5044
- |\
- | + The fix for issue 9614 was not completed, as a result, for those erasure coded PGs with one OSD down, the state was wrongly marked as active+clean+degraded. This patch makes sure the clean flag is not set for such PG. Signed-off-by: Guang Yang <yguang@yahoo-inc.com>
- + Pull request 5051
- |\
- | + osd: cache pool: flush object ignoring cache min flush age when cache pool is full Signed-off-by: Xinze Chi <xmdxcxz@gmail.com>
- | + osd: add local_mtime to struct object_info_t
- + Pull request 5056
- |\
- | + rgw/logrotate.conf: Rename service name
- + Pull request 5062
- |\
- | + Objecter: resend linger ops on any interval change
- | + osd_types: factor out is_new_interval from check_new_interval
- + Pull request 5129
- |\
- | + mon: remove unused variable
- + Pull request 5170
- |\
- | + rgw: send Content-Length in response for HEAD on Swift account.
- | + rgw: send Content-Length in response for DELETE on Swift container.
- | + rgw: send Content-Length in response for PUT on Swift container.
- | + rgw: send Content-Length in response for GET on Swift container.
- | + rgw: enable end_header() to handle proposal of Content-Length.
- + Pull request 5171
- + librbd: assertion failure race condition if watch disconnected
rados¶
./virtualenv/bin/teuthology-suite --priority 1000 --subset 1/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
rgw¶
run=loic-2015-07-15_14:21:51-rgw-firefly-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --filter="$filter" --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
run=loic-2015-07-09_21:11:18-rgw-firefly-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --filter="$filter" --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-07-15_14:21:51-rgw-firefly-backports---basic-multi/
- environmental noise skipping: no hosts matched
- environmental noise FATAL: all hosts have already failed -- aborting
./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-07-09_21:11:18-rgw-firefly-backports---basic-multi
- known bug Couldn't find index page for 'boto'
fs¶
run="loic-2015-07-16_13:13:38-fs-firefly-backports-testing-basic-multi"
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- green http://pulpito.ceph.com/smithfarm-2015-07-21_02:16:37-fs-firefly-backports-testing-basic-multi/
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- one dead, otherwise green http://pulpito.ceph.com/loic-2015-07-16_13:13:38-fs-firefly-backports-testing-basic-multi/
./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-07-09_21:07:09-fs-firefly-backports-testing-basic-multi
- environmental problem http://pulpito.ceph.com/loic-2015-07-09_21:07:09-fs-firefly-backports-testing-basic-multi/967283/
2015-07-14T02:20:26.544 INFO:teuthology.misc:trying to connect to ubuntu@plana79.front.sepia.ceph.com
2015-07-14T02:20:26.544 INFO:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'plana79.front.sepia.ceph.com', 'timeout': 60}
2015-07-14T02:20:28.542 DEBUG:teuthology.orchestra.remote:[Errno 113] No route to host
- environmental problem http://pulpito.ceph.com/loic-2015-07-09_21:07:09-fs-firefly-backports-testing-basic-multi/967319/
2015-07-14T04:31:29.497 DEBUG:teuthology.misc:waited 63.8534479141
2015-07-14T04:31:30.498 INFO:teuthology.misc:trying to connect to ubuntu@plana69.front.sepia.ceph.com
2015-07-14T04:31:30.498 INFO:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'plana69.front.sepia.ceph.com', 'timeout': 60}
2015-07-14T04:32:30.498 DEBUG:teuthology.orchestra.remote:timed out
rbd¶
run=loic-2015-07-09_16:57:58-rbd-firefly-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --filter="$filter" --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-07-09_16:57:58-rbd-firefly-backports---basic-multi
- environmental noise adjust-ulimits ceph-coverage command fails in two different ways
2015-07-14T02:59:32.995 INFO:tasks.qemu:starting qemu...
2015-07-14T02:59:32.995 INFO:teuthology.orchestra.run.mira077:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term qemu-system-x86_64 -enable-kvm -nographic -m 4096 -drive file=/home/ubuntu/cephtest/qemu/base.client.0.qcow2,format=qcow2,if=virtio -cdrom /home/ubuntu/cephtest/qemu/client.0.iso -drive file=rbd:rbd/client.0.0:id=0,format=raw,if=virtio,cache=none'
2015-07-14T02:59:32.997 DEBUG:teuthology.run_tasks:Unwinding manager qemu
2015-07-14T02:59:32.998 INFO:tasks.qemu:waiting for qemu tests to finish...
2015-07-14T02:59:33.096 INFO:tasks.qemu.client.0.mira077.stderr:Traceback (most recent call last):
2015-07-14T02:59:33.096 INFO:tasks.qemu.client.0.mira077.stderr: File "/usr/bin/daemon-helper", line 66, in <module>
2015-07-14T02:59:33.096 INFO:tasks.qemu.client.0.mira077.stderr: preexec_fn=os.setsid,
powercycle¶
./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz
- green http://pulpito.ceph.com/loic-2015-07-09_21:15:54-powercycle-firefly-backports-testing-basic-multi
upgrade¶
teuthology-openstack --verbose --key-name loic --suite upgrade/firefly --filter=ubuntu_14.04 --suite-branch firefly --ceph firefly-backports
teuthology run commit:73cad63e89ebeb9de47c7f2a2187deff8c39d590 (firefly-backports may 2015)¶
rados¶
./virtualenv/bin/teuthology-suite --priority 1000 --subset 1/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-05-29_16:20:09-rados-firefly-backports---basic-multi/
- known bug ceph-objectstore-tool: import failure with status 139
- environmental problem sudo rm -f /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
- known bug [ERR] 3.0s0 scrub stat mismatch, got 2005/2009 objects, 0/0 clones, 2005/2009 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 3981889330/3988381157 bytes. in cluster log
Updated by Nathan Cutler almost 9 years ago
- Description updated (diff)
s/Loic/Nathan/g in Workflow section
Updated by Nathan Cutler almost 9 years ago
- Priority changed from Normal to Urgent
Updated by Loïc Dachary almost 9 years ago
- Description updated (diff)
- Priority changed from Urgent to Normal
Updated by Loïc Dachary almost 9 years ago
- Priority changed from Normal to Urgent
Updated by Loïc Dachary almost 9 years ago
- Translation missing: en.field_release set to 8
Updated by Loïc Dachary almost 9 years ago
- Target version changed from 474 to v0.80.11
Updated by Nathan Cutler almost 9 years ago
- Status changed from New to In Progress
Updated by Loïc Dachary almost 9 years ago
- Subject changed from firefly 0.80.11 to firefly v0.80.11
- Description updated (diff)
Updated by Nathan Cutler over 8 years ago
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5410
- |\
- | + Update OSDMonitor.cc
- + Pull request 5409
- |\
- | + mon/PGMonitor: avoid uint64_t overflow when checking pool 'target/max' status. Fixes: #12401
- + Pull request 5408
- |\
- | + mon: OSDMonitor: fix hex output on 'osd reweight'
- + Pull request 5406
- |\
- | + Log::reopen_log_file: take m_flush_mutex
- + Pull request 5404
- |\
- | + mon/PGMonitor: bug fix pg monitor get crush rule
- + Pull request 5403
- |\
- | + Mutex: fix leak of pthread_mutexattr
- + Pull request 5394
- |\
- | + ceph.spec.in: drop SUSE-specific %py_requires macro
- + Pull request 5389
- |\
- | + OSDMonitor: disallow ec pools as tiers
- + Pull request 5388
- |\
- | + UnittestBuffer: Add bufferlist zero test case
- | + buffer: Fix bufferlist::zero bug with special case
- + Pull request 5358
- |\
- | + mon: PaxosService: call post_refresh() instead of post_paxos_update()
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case
- | + TestPGLog: fix noop log proc_replica_log test case
- | + TestPGLog: add test for 11358
- | + PGLog::proc_replica_log: handle split out overlapping entries
- + Pull request 5235
- |\
- | + os/chain_xattr: handle read on chnk-aligned xattr
- + Pull request 5234
- |\
- | + rgw: error out if frontend did not send all data
- + Pull request 5232
- |\
- | + rgw: simplify content length handling
- | + rgw: make compatability deconfliction optional.
- | + rgw: improve content-length env var handling
- + Pull request 5225
- |\
- | + ceph.spec.in: use _udevrulesdir to eliminate conditionals
- + Pull request 5224
- |\
- | + ceph.spec.in: do not run fdupes, even on SLE/openSUSE
- + Pull request 5217
- |\
- | + ceph.spec.in: python-argparse only in Python 2.6
- + Pull request 5200
- |\
- | + ReplicatedPG::maybe_handle_cache: do not skip promote for write_ordered
- | + osd: promotion on 2nd read for cache tiering
- | + ceph_test_rados_api_tier: test promote-on-second-read behavior
- | + osd/osd_types: be pedantic about encoding last_force_op_resend without feature bit
- + Pull request 5199
- |\
- | + mon: always reply mdsbeacon
- | + mon/MDSMonitor: rename labels to a better name
- | + mon: send no_reply() to peon to drop ignored mdsbeacon
- | + mon: remove unnecessary error handling
- + Pull request 5056
- |\
- | + rgw/logrotate.conf: Rename service name
- + Pull request 5043
- |\
- | + For pgls OP, get/put budget on per list session basis, instead of per OP basis, which could lead to deadlock.
- + Pull request 4861
- |\
- | + rgw: Do not enclose the Bucket header in quotes
- + Pull request 4854
- + Pull request 4788
- |\
- | + mon/OSDMonitor: require mon_allow_pool_delete = true to remove pools
- + Pull request 4769
- |\
- | + mon: prevent bucket deletion when referenced by a rule
- + Pull request 4535
- + rgw: use correct objv_tracker for bucket instance
Updated by Loïc Dachary over 8 years ago
$ git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5410
- |\
- | + Update OSDMonitor.cc
- + Pull request 5409
- |\
- | + mon/PGMonitor: avoid uint64_t overflow when checking pool 'target/max' status. Fixes: #12401
- + Pull request 5408
- |\
- | + mon: OSDMonitor: fix hex output on 'osd reweight'
- + Pull request 5406
- + Pull request 5404
- |\
- | + mon/PGMonitor: bug fix pg monitor get crush rule
- + Pull request 5403
- |\
- | + Mutex: fix leak of pthread_mutexattr
- + Pull request 5394
- + Pull request 5389
- + Pull request 5388
- |\
- | + UnittestBuffer: Add bufferlist zero test case
- | + buffer: Fix bufferlist::zero bug with special case
- + Pull request 5358
- |\
- | + mon: PaxosService: call post_refresh() instead of post_paxos_update()
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case
- | + TestPGLog: fix noop log proc_replica_log test case
- | + TestPGLog: add test for 11358
- | + PGLog::proc_replica_log: handle split out overlapping entries
- + Pull request 5235
- + Pull request 5234
- |\
- | + rgw: error out if frontend did not send all data
- + Pull request 5232
- |\
- | + rgw: simplify content length handling
- | + rgw: make compatability deconfliction optional.
- | + rgw: improve content-length env var handling
- + Pull request 5225
- |\
- | + ceph.spec.in: use _udevrulesdir to eliminate conditionals
- + Pull request 5224
- + Pull request 5217
- |\
- | + ceph.spec.in: python-argparse only in Python 2.6
- + Pull request 5200
- |\
- | + ReplicatedPG::maybe_handle_cache: do not skip promote for write_ordered
- | + osd: promotion on 2nd read for cache tiering
- | + ceph_test_rados_api_tier: test promote-on-second-read behavior
- | + osd/osd_types: be pedantic about encoding last_force_op_resend without feature bit
- + Pull request 5199
- |\
- | + mon: always reply mdsbeacon
- | + mon/MDSMonitor: rename labels to a better name
- | + mon: send no_reply() to peon to drop ignored mdsbeacon
- | + mon: remove unnecessary error handling
- + Pull request 5056
- |\
- | + rgw/logrotate.conf: Rename service name
- + Pull request 5043
- + Pull request 4861
- |\
- | + rgw: Do not enclose the Bucket header in quotes
- + Pull request 4854
- + Pull request 4788
- + Pull request 4769
- + Pull request 4535
- + rgw: use correct objv_tracker for bucket instance
Updated by Nathan Cutler over 8 years ago
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5410
- + Pull request 5409
- + Pull request 5408
- + Pull request 5406
- + Pull request 5404
- + Pull request 5403
- + Pull request 5394
- + Pull request 5389
- + Pull request 5388
- |\
- | + UnittestBuffer: Add bufferlist zero test case /commit/37d16a9e572580eeae86a2bae6d4ddd0299fb833
- | + buffer: Fix bufferlist::zero bug with special case /commit/4443acdbef1148e0261bce25f7d7a3433e09cecc
- + Pull request 5358
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case /commit/8c02376bd58d463f742966b67fa075a59b5f4269
- | + TestPGLog: fix noop log proc_replica_log test case /commit/bba50ce8f227af29d559b486274871bb3999fb24
- | + TestPGLog: add test for 11358 /commit/fdff8ce6c996cda7b3966d20c24b20ff545e468a
- | + PGLog::proc_replica_log: handle split out overlapping entries /commit/65028b6304235ba5fa54d14805028db1a032e5a0
- + Pull request 5235
- + Pull request 5234
- |\
- | + rgw: error out if frontend did not send all data /commit/4e7de5b5f0e32d1183e2a0490d65e4e01490d942
- + Pull request 5232
- |\
- | + rgw: simplify content length handling /commit/d5d033419d42f78afe5697ed8b030d95daa8d07f
- | + rgw: make compatability deconfliction optional. /commit/493af9072e6bb07064e15bfe7badc7643e7458da
- | + rgw: improve content-length env var handling /commit/7d9b4f14597463d4c0f9755163c9a47ae7a37ef3
- + Pull request 5225
- + Pull request 5224
- + Pull request 5217
- + Pull request 5200
- |\
- | + ReplicatedPG::maybe_handle_cache: do not skip promote for write_ordered /commit/54264210f4ebec23b08dd6712e09aea49543b52b
- | + osd: promotion on 2nd read for cache tiering /commit/7e2526784203b0f1bce08869aa7b1fda9c5eedd9
- | + ceph_test_rados_api_tier: test promote-on-second-read behavior /commit/66f61cd9ae105948f653fd888812df270ff1e832
- | + osd/osd_types: be pedantic about encoding last_force_op_resend without feature bit /commit/a8f3d6e1f1f186cbe2299566a575bf5a40500227
- + Pull request 5199
- + Pull request 5056
- |\
- | + rgw/logrotate.conf: Rename service name /commit/00c94897368dc93a852ef6e6277e70405e7513ad
- + Pull request 5043
- + Pull request 4861
- |\
- | + rgw: Do not enclose the Bucket header in quotes /commit/dcb1b5bf7523e524fb7bd2fa603257514bbc2cbf
- + Pull request 4854
- + Pull request 4788
- + Pull request 4769
- + Pull request 4535
- + rgw: use correct objv_tracker for bucket instance /commit/f6022639758ec13b9a25b03cd831882db0b517b3
Updated by Nathan Cutler over 8 years ago
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5410
- + Pull request 5409
- + Pull request 5408
- + Pull request 5406
- + Pull request 5404
- + Pull request 5403
- + Pull request 5394
- + Pull request 5389
- + Pull request 5388
- + Pull request 5358
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case /commit/8c02376bd58d463f742966b67fa075a59b5f4269
- | + TestPGLog: fix noop log proc_replica_log test case /commit/bba50ce8f227af29d559b486274871bb3999fb24
- | + TestPGLog: add test for 11358 /commit/fdff8ce6c996cda7b3966d20c24b20ff545e468a
- | + PGLog::proc_replica_log: handle split out overlapping entries /commit/65028b6304235ba5fa54d14805028db1a032e5a0
- + Pull request 5235
- + Pull request 5234
- |\
- | + rgw: error out if frontend did not send all data /commit/4e7de5b5f0e32d1183e2a0490d65e4e01490d942
- + Pull request 5232
- |\
- | + rgw: simplify content length handling /commit/d5d033419d42f78afe5697ed8b030d95daa8d07f
- | + rgw: make compatability deconfliction optional. /commit/493af9072e6bb07064e15bfe7badc7643e7458da
- | + rgw: improve content-length env var handling /commit/7d9b4f14597463d4c0f9755163c9a47ae7a37ef3
- + Pull request 5225
- + Pull request 5224
- + Pull request 5217
- + Pull request 5200
- + Pull request 5199
- + Pull request 5056
- |\
- | + rgw/logrotate.conf: Rename service name /commit/00c94897368dc93a852ef6e6277e70405e7513ad
- + Pull request 5043
- + Pull request 4861
- |\
- | + rgw: Do not enclose the Bucket header in quotes /commit/dcb1b5bf7523e524fb7bd2fa603257514bbc2cbf
- + Pull request 4854
- + Pull request 4788
- + Pull request 4769
- + Pull request 4535
- + rgw: use correct objv_tracker for bucket instance /commit/f6022639758ec13b9a25b03cd831882db0b517b3
Updated by Nathan Cutler over 8 years ago
#12942 needs to go into v0.80.11, so we will need another integration testing round
Updated by Nathan Cutler over 8 years ago
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5218
- |\
- | + ceph.spec.in: tweak ceph-common for SUSE/openSUSE /commit/a4f7c933117b6f2126fbbb8302bfc74a57f9eace
- + Pull request 5236
- |\
- | + mon: prevent pool with snapshot state from being used as a tier /commit/9f1ee586991a3e039f0705e225ca18addd61e373
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case /commit/8c02376bd58d463f742966b67fa075a59b5f4269
- | + TestPGLog: fix noop log proc_replica_log test case /commit/bba50ce8f227af29d559b486274871bb3999fb24
- | + TestPGLog: add test for 11358 /commit/fdff8ce6c996cda7b3966d20c24b20ff545e468a
- | + PGLog::proc_replica_log: handle split out overlapping entries /commit/65028b6304235ba5fa54d14805028db1a032e5a0
- + Pull request 5360
- |\
- | + mon: MonitorDBStore: make get_next_key() work properly /commit/de53addac8234037a66cdd45cf8007deba7a0530
- | + mon: MonitorDBStore: get_next_key() only if prefix matches /commit/d52187019d321fe8a2dc54fe8a67a5139c310db1
- + Pull request 5407
- |\
- | + osd: pg_interval_t::check_new_interval should not rely on pool.min_size to determine if the PG was active /commit/150092aaf941a188b8fed256e05886440c24280c
- | + osd: Move IsRecoverablePredicate/IsReadablePredicate to osd_types.h /commit/2a23072f9a2c20e3b08d93c68d000fed1fdaf67b
- | + common/hobject.cc: explicitly typecast p.value_.get_int() /commit/9e4bfbee2b87063bf5877559207af8c01f5674b8
- | + ceph_objectstore_tool.cc: change ghobject_t::NO_SHARD to shard_id_t::NO_SHARD /commit/53a89c2c253a98f4e00744ae7d907f1d8058bb37
- | + osd: replace shard_t usage with shard_id_t /commit/4e9314b9515f9cf385beeb4120f21ce726cad86b
- | + osd: explicit shard_id_t() and NO_SHARD /commit/30420a543fe7cdcc04fa469afcb4e1bce8f33417
- | + osd: loop over uint8_t instead of shard_id_t /commit/de29a9ca4f5230b0200a6d56d5bede5193f076b2
- | + osd: factorize shard_id_t/shard_t into a struct /commit/c705f87a8871a3c7b78a080799b45079a57bb46e
- | + common: WRITE_{EQ,CMP}_OPERATORS_1 /commit/ef4b5ddca17c833b042348837b35e8dc6ac72c35
- + Pull request 5526
- |\
- | + osd/OSDMap: handle incrementals that modify+del pool /commit/ebba1d59b5e4bc11cbdfcda4e480639f7d9e1498
- + Pull request 5529
- |\
- | + common/syncfs: fall back to sync(2) if syncfs(2) not available /commit/d0d6727762ebda858065101635935df3d44a18ad
- | + sync_filesystem.h: fix unreachable code /commit/40494c6e479c2ec4dfe5f6c2d6aef3b6fa841620
- | + mon, os: check the result of sync_filesystem. /commit/3dbbc86ad6d1e7131bbe49a4eff1557d7da9822f
- | + common: Don't call ioctl(BTRFS_IOC_SYNC) in sync_filesystem. /commit/7fa6fdc6c5e52f11456e4bea4ae32fd62248c80b
- | + common: Directly return the result of syncfs(). /commit/9f15ed5bb5a837727cb3bef70508e056c125a518
- + Pull request 5541
- |\
- | + ceph-disk: don't change the journal partition uuid /commit/04b2a878b1329202758465cf8e9b0f874cbeeef5
- | + ceph-disk: set guid if reusing a journal partition /commit/e57e6f5da10a62f2f4d7b1a6a734a095ed494ebe
- + Pull request 5619
- |\
- | + os/FileJournal: Fix journal write fail, align for direct io /commit/278d732ecd3594cd76e172d78ce3ec84e58e178b
- + Pull request 5698
- |\
- | + mon: add a cache layer over MonitorDBStore /commit/411769c1461c11611b479bd826c72c56b3ce47c5
- + Pull request 5726
- |\
- | + Objecter: pg_interval_t::is_new_interval needs pgid from previous pool /commit/d3c94698e4e852bef3e65fbf439f5f209fbc0b25
- | + osd_types::is_new_interval: size change triggers new interval /commit/56d267b7ae02070a7d7ed247990b84124fd62411
- + Pull request 5729
- |\
- | + rgw: init some manifest fields when handling explicit objs /commit/644f213c672f6fe2786e041043fdd55f8399871e
- + Pull request 5730
- |\
- | + rgw: url encode exposed bucket /commit/e7931a73df1ab77feb1c2ece13e3de3989ef7a0e
- | + rgw: Do not enclose the Bucket header in quotes /commit/558c52955d464827630e0aa2fed970df987bb036
- + Pull request 5813
- |\
- | + tests: tiering agent and proxy read /commit/2c0d7feeb1c7592887e0408fe4fadaa9b4f659e9
- | + osd: trigger the cache agent after a promotion /commit/aa911767d9326c8aa37671883892b7d383596960
- + Pull request 5814
- |\
- | + config: skip lockdep for intentionally recursive md_config_t lock /commit/2c2ffa1d6d1112dbf52cbbe36f4a5376e17da56a
- + Pull request 5815
- |\
- | + osd: Keep a reference count on Connection while calling send_message() /commit/f39c7917a39e445efa8d73178657fc5960772275
- + Pull request 5820
- |\
- | + osd/PGLog: dirty_to is inclusive /commit/cd1396cd62c79b177e46cfb57ab6b3b6fdcd227b
- + Pull request 5822
- |\
- | + WBThrottle::clear_object: signal if we cleared an object /commit/b894b368790de3383295372250888ba674502fb1
- + Pull request 5823
- |\
- | + OSD: add scrub_finalize_wq suicide timeout /commit/bff2f477c4ad86b4bd6e3ca3e637a6168c5c8053
- | + OSD: add scrub_wq suicide timeout /commit/91d4c217e32b8b76fcac49f37879a3f78088694d
- | + OSD: add op_wq suicide timeout /commit/7f6ec65b7c2ca0174142c1c48f18998d8c586b02
- | + OSD: add remove_wq suicide timeout /commit/9ce8ce689009cf8ef749edd320d1c2a73ecc2f90
- | + OSD: add snap_trim_wq suicide timeout /commit/6926a64fbd4718b8a5df8e04545bc93c4981e413
- | + OSD: add recovery_wq suicide timeout /commit/d31d1f6f5b08362779fa6af72690e898d2407b90
- | + OSD: add command_wq suicide timeout /commit/f85ec2a52e969f9a7927d0cfacda6a1cc6f2898c
- + Pull request 5831
- |\
- | + rgw: init script waits until the radosgw stops /commit/3be204f6a22be109d2aa8cfd5cee09ec3381d9b2
- + Pull request 5988
- |\
- | + PG::handle_advance_map: on_pool_change after handling the map change /commit/48c929e689b0fa5138922fcb959be5d05296e59a
- + Pull request 5989
- |\
- | + Common/Thread: pthread_attr_destroy(thread_attr) when done with it /commit/9f7e9c863bc91a1f1ffa05ccec8309e384b331d5
- + Pull request 5991
- |\
- | + WorkQueue: add/remove_work_queue methods now thread safe /commit/8c14cad0895590f19a6640c945b52213f30a9671
- + Pull request 5992
- |\
- | + upstart: limit respawn to 3 in 30 mins /commit/20ad17d271fb443f6c40591e205e880b5014a4f3
- + Pull request 5993
- |\
- | + doc: correct links to download.ceph.com /commit/f71a6ebf1b371f9389865a0a33652841726ff77b
- + Pull request 5995
- |\
- | + Fix casing of Content-Type header /commit/3a701e3ab5d0097a0c990ef23b5971dee4e085a6
- | + rgw: shouldn't return content-type: application/xml if content length is 0 /commit/f13e465b49a2e3288a7847804d3a9635a01c8dda
- + Pull request 5997
- |\
- | + rgw: use strict_strtoll() for content length /commit/86f9e55f0c151c0b9a289b475f87b6a11329e6e1
- + Pull request 6010
- + mon: handle case where mon_globalid_prealloc > max_global_id /commit/6d82eb165fdc91851f702a463022b26c50f5094b
- + mon: change mon_globalid_prealloc to 10000 /commit/f545a0f430bf6f1e26983fc0ff20a645697f017c
Updated by Loïc Dachary over 8 years ago
rados¶
./virtualenv/bin/teuthology-suite --priority 1000 --subset 13/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-09-29_16:59:40-rados-firefly-backports---basic-multi/
- known bug http://tracker.ceph.com/issues/13300
- LibRadosTwoPoolsPP tests returning ENOTEMPTY (Directory not empty)
- known bug ceph.newdream.net is no more http://tracker.ceph.com/issues/13357
- In, e.g.,
rados/thrash/{clusters/fixed-2.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}
CommandFailedError: Command failed on burnupi64 with status 4: "wget -q -O /home/ubuntu/cephtest/admin_socket_client.0/objecter_requests -- 'http://ceph.newdream.net/git/?p=ceph.git;a=blob_plain;f=src/test/admin_socket/objecter_requests;hb=firefly-backports' && chmod u=rx -- /home/ubuntu/cephtest/admin_socket_client.0/objecter_requests"
- Fixed by loicd in ceph-qa-suite wip branch, successfully tested with:
./virtualenv/bin/teuthology-suite --priority 101 --subset 13/18 --suite rados --suite-branch wip-git-ceph-com-firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="rados/thrash/{clusters/fixed-2.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}"
http://pulpito.ceph.com/smithfarm-2015-10-05_02:57:39-rados-firefly-backports---basic-multi/
- known bug http://tracker.ceph.com/issues/13300
Rescheduling with -s rados/verify --filter valgrind
to see if valgrind failures are still there in the latest firefly branch:
./virtualenv/bin/teuthology-suite --priority 1000 --suite rados/verify --filter valgrind --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/smithfarm-2015-10-05_05:51:51-rados:verify-firefly-backports---basic-multi/
- all failed -> need a new integration testing round
Updated by Loïc Dachary over 8 years ago
rgw¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18
- red http://pulpito.ceph.com/loic-2015-09-29_17:02:25-rgw-firefly-backports---basic-multi
"SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:======================================================================
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:ERROR: testContainerSerializedInfo (test.functional.tests.TestAccount)
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:----------------------------------------------------------------------
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:Traceback (most recent call last):
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 233, in testContainerSerializedInfo
2015-09-30T14:32:47.553 INFO:teuthology.orchestra.run.burnupi51.stderr:    self.assertEquals(headers['content-type'],
2015-09-30T14:32:47.553 INFO:teuthology.orchestra.run.burnupi51.stderr:KeyError: 'content-type'
- saw valgrind issues
- rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests.yaml validater/valgrind.yaml}
- rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests_multiregion.yaml validater/valgrind.yaml}
2015-09-30T14:50:36.368 INFO:tasks.ceph:Checking for errors in any valgrind logs...
2015-09-30T14:50:36.369 INFO:teuthology.orchestra.run.plana93:Running: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq"
2015-09-30T14:50:36.435 INFO:teuthology.orchestra.run.plana31:Running: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq"
2015-09-30T14:50:36.482 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/client.0.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/client.0.log.gz
2015-09-30T14:50:36.483 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mon.a.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/mon.a.log.gz
2015-09-30T14:50:36.483 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mon.c.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/mon.c.log.gz
2015-09-30T14:50:36.483 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.0.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.0.log.gz
2015-09-30T14:50:36.484 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.1.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.484 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.1.log.gz
2015-09-30T14:50:36.484 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.2.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.484 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.2.log.gz
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mds.a.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mds.a.log.gz kind <kind>Leak_PossiblyLost</kind>
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mon.b.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.485 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/mon.b.log.gz
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.3.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.486 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.3.log.gz
2015-09-30T14:50:36.486 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.4.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.486 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.4.log.gz
2015-09-30T14:50:36.486 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.5.log.gz kind <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.486 ERROR:tasks.ceph:saw valgrind issue <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.5.log.gz
2015-09-30T14:50:36.486 ERROR:teuthology.run_tasks:Manager failed: ceph
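A side note on the zgrep invocation in this log: the trailing /dev/null argument forces grep into multi-file mode, so every match is prefixed with the log file it came from even when only one file matches. A minimal sketch, using hypothetical scratch files in place of /var/log/ceph/valgrind:

```shell
# grep only prints "file:match" when given more than one file argument;
# appending /dev/null guarantees that, so each valgrind hit is attributed
# to its daemon's log. The scratch directory here is purely illustrative.
tmp=$(mktemp -d)
printf '<kind>Leak_DefinitelyLost</kind>\n' > "$tmp/client.0.log"
grep '<kind>' "$tmp"/*.log /dev/null | sort | uniq
rm -r "$tmp"
```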
Updated by Loïc Dachary over 8 years ago
fs¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 17/18
- red http://pulpito.ceph.com/loic-2015-09-29_17:03:15-fs-firefly-backports-testing-basic-multi
- environmental noise
fatal: unable to connect to ceph.com: errno=Connection timed out
Running: 'rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0' // Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 // Running: 'rmdir -- /home/ubuntu/cephtest/mnt.0' // rmdir: failed to remove '/home/ubuntu/cephtest/mnt.0': Device or resource busy
- environmental noise
Updated by Loïc Dachary over 8 years ago
rbd¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 9/18
- red http://pulpito.ceph.com/loic-2015-09-29_17:04:19-rbd-firefly-backports---basic-multi
- environmental noise
Cloning into 'qemu'... // fatal: unable to connect to apt-mirror.front.sepia.ceph.com: // errno=Connection refused
- environmental noise
Updated by Loïc Dachary over 8 years ago
powercycle¶
./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz
Updated by Loïc Dachary over 8 years ago
upgrade¶
./virtualenv/bin/teuthology-suite --machine-type vps --suite upgrade/firefly --filter=ubuntu_14.04,centos_6 --suite-branch firefly --ceph firefly-backports
- red http://pulpito.ceph.com/loic-2015-09-27_01:05:46-upgrade:firefly-firefly-backports---basic-vps
- environmental problem
Downloading/unpacking pip from https://pypi.python.org/packages/source/p/pip/pip-7.1.2.tar.gz#md5=3823d2343d9f3aaab21cf9c917710196 // Error <urlopen error The read operation timed out>
- unknown problem
Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph status --format=json-pretty' 2015-09-28T13:41:40.652 INFO:tasks.thrashosds.ceph_manager:no progress seen, keeping timeout for now // AssertionError: failed to recover before timeout expired
- environmental problem (yum)
Connecting to ceph.com|173.236.248.54|:80... failed: Connection timed out. // Connecting to ceph.com|2607:f298:6050:51f3:f816:3eff:fe62:31d3|:80... failed: Network is unreachable. 2015-09-28T08:57:31.474 INFO:tasks.workunit:Stopping suites/blogbench.sh on client.0...
- environmental problem
Running: 'sudo yum install ceph-debuginfo -y' Loaded plugins: fastestmirror, priorities, security Setting up Install Process Determining fastest mirrors Error: Cannot find a valid baseurl for repo: extras Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=extras error was 14: PYCURL ERROR 56 - "Failure when receiving data from the peer"
- environmental problem
Could not fetch URL https://pypi.python.org/simple/: There was a problem confirming the ssl certificate: <urlopen error _ssl.c:477: The handshake operation timed out> Will skip URL https://pypi.python.org/simple/ when looking for download links for pip in ./virtualenv/lib/python2.6/site-packages Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement pip in ./virtualenv/lib/python2.6/site-packages 2015-09-28T10:40:48.635 Downloading/unpacking pip Cleaning up... No distributions at all found for pip in ./virtualenv/lib/python2.6/site-packages
- environmental problem
Updated by Loïc Dachary over 8 years ago
ceph-deploy¶
./virtualenv/bin/teuthology-suite --machine-type vps --suite ceph-deploy --suite-branch firefly --ceph firefly-backports --filter=ubuntu_14,centos_6 --email ncutler@suse.cz
- red http://pulpito.ceph.com/loic-2015-09-27_01:08:08-ceph-deploy-firefly-backports---basic-vps
- four of the five test failures are attributable to #13269 and can be ignored as environmental noise
- in the fifth failure, calls to "osd tier add ... --force-nonempty" in test/librados/tier.cc are failing with -39 (ENOTEMPTY - Directory Not Empty); rescheduled the test at http://pulpito.ceph.com/smithfarm-2015-09-29_02:24:57-ceph-deploy-firefly-backports---basic-vps/
Updated by Nathan Cutler over 8 years ago
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5050
- + Pull request 5236
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case
- | + TestPGLog: fix noop log proc_replica_log test case
- | + TestPGLog: add test for 11358
- | + PGLog::proc_replica_log: handle split out overlapping entries
- + Pull request 5360
- + Pull request 5407
- |\
- | + osd: pg_interval_t::check_new_interval should not rely on pool.min_size to determine if the PG was active
- | + osd: Move IsRecoverablePredicate/IsReadablePredicate to osd_types.h
- | + common/hobject.cc: explicitly typecast p.value_.get_int()
- | + ceph_objectstore_tool.cc: change ghobject_t::NO_SHARD to shard_id_t::NO_SHARD
- | + osd: replace shard_t usage with shard_id_t
- | + osd: explicit shard_id_t() and NO_SHARD
- | + osd: loop over uint8_t instead of shard_id_t
- | + osd: factorize shard_id_t/shard_t into a struct
- | + common: WRITE_{EQ,CMP}_OPERATORS_1
- + Pull request 5526
- |\
- | + osd/OSDMap: handle incrementals that modify+del pool
- + Pull request 5529
- |\
- | + common/syncfs: fall back to sync(2) if syncfs(2) not available
- | + sync_filesystem.h: fix unreachable code
- | + mon, os: check the result of sync_filesystem.
- | + common: Don't call ioctl(BTRFS_IOC_SYNC) in sync_filesystem.
- | + common: Directly return the result of syncfs().
- + Pull request 5532
- |\
- | + Fix casing of Content-Type header
- | + rgw: we should not overide Swift sent content type
- | + rgw: send Content-Length in response for GET on Swift account.
- | + rgw: enforce Content-Type in Swift responses.
- | + rgw: force content_type for swift bucket stats request
- | + rgw: force content-type header for swift account responses without body
- | + rgw: shouldn't return content-type: application/xml if content length is 0
- + Pull request 5541
- + Pull request 5619
- + Pull request 5698
- + Pull request 5726
- + Pull request 5729
- + Pull request 5730
- + Pull request 5813
- + Pull request 5814
- + Pull request 5815
- |\
- | + osd: Keep a reference count on Connection while calling send_message()
- + Pull request 5820
- + Pull request 5822
- |\
- | + WBThrottle::clear_object: signal if we cleared an object
- + Pull request 5823
- |\
- | + OSD: add scrub_finalize_wq suicide timeout
- | + OSD: add scrub_wq suicide timeout
- | + OSD: add op_wq suicide timeout
- | + OSD: add remove_wq suicide timeout
- | + OSD: add snap_trim_wq suicide timeout
- | + OSD: add recovery_wq suicide timeout
- | + OSD: add command_wq suicide timeout
- + Pull request 5831
- + Pull request 5988
- + Pull request 5991
- + Pull request 5992
- + Pull request 5997
- |\
- | + rgw: use strict_strtoll() for content length
- + Pull request 6010
- + Pull request 6087
- + Pull request 6091
- + Pull request 6203
- + Pull request 6207
- + Pull request 6325
Updated by Nathan Cutler over 8 years ago
rados¶
Latest firefly-backports contains a fix for the valgrind issue. Verifying with:
./virtualenv/bin/teuthology-suite --priority 1000 --suite rados/verify --filter valgrind --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/smithfarm-2015-10-12_02:27:44-rados:verify-firefly-backports---basic-multi/
Also scheduled a full rados run:
./virtualenv/bin/teuthology-suite --priority 1000 --subset 7/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
- red http://pulpito.ceph.com/smithfarm-2015-10-12_02:31:17-rados-firefly-backports---basic-multi/
- 1 valgrind failure fixed by https://github.com/ceph/ceph/pull/6325
filter='rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/mon_recovery.yaml validater/valgrind.yaml}'
teuthology-suite --priority 101 --filter="$filter" --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
Updated by Nathan Cutler over 8 years ago
rgw¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 7/18
- fail http://pulpito.ceph.com/smithfarm-2015-10-12_02:33:22-rgw-firefly-backports---basic-multi/
- valgrind failure fixed by https://github.com/ceph/ceph/pull/6325
- rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_swift.yaml validater/lockdep.yaml}
- rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/erasure-coded.yaml tasks/rgw_s3tests_multiregion.yaml validater/valgrind.yaml}
- valgrind failure fixed by https://github.com/ceph/ceph/pull/6325
run=smithfarm-2015-10-12_02:33:22-rgw-firefly-backports---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"
Updated by Nathan Cutler over 8 years ago
fs¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 7/18
- fail http://pulpito.ceph.com/smithfarm-2015-10-12_02:34:59-fs-firefly-backports-testing-basic-multi/
- 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'
- valgrind failure fixed by https://github.com/ceph/ceph/pull/6325
- fs/verify/{clusters/fixed-3.yaml debug/mds_client.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-3.yaml debug/mds_client.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
- 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'
run=smithfarm-2015-10-12_02:34:59-fs-firefly-backports-testing-basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"
- fail http://pulpito.ceph.com/loic-2015-10-20_21:19:19-fs-firefly-backports-testing-basic-multi
- two passed, one failed: environmental noise
run=loic-2015-10-20_21:19:19-fs-firefly-backports-testing-basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"
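The reschedule commands above build their --filter value by joining the failed/dead job descriptions with commas. A minimal sketch of that assembly step, with canned descriptions standing in for the jq output fetched from paddles (the yaml names are hypothetical):

```shell
# Canned job descriptions replace the `curl ... | jq ...` stage; the
# loop-and-sed pipeline is the same trailing-comma trim used above,
# with printf instead of `echo -n` for portability.
descriptions='fs/verify/a.yaml
fs/verify/b.yaml'
filter=$(printf '%s\n' "$descriptions" | while read -r d ; do printf '%s,' "$d" ; done | sed -e 's/,$//')
echo "$filter"   # fs/verify/a.yaml,fs/verify/b.yaml
```

The resulting comma-separated list is then passed to teuthology-suite as --filter="$filter" to re-run only the jobs that failed.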
Updated by Nathan Cutler over 8 years ago
rbd¶
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 7/18
- fail http://pulpito.ceph.com/smithfarm-2015-10-12_02:36:15-rbd-firefly-backports---basic-multi/
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dab77fb3e454565e3994d004080189436dc7c511 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dab77fb3e454565e3994d004080189436dc7c511 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/qemu-iotests.sh'
- 'mkdir
run=smithfarm-2015-10-12_02:36:15-rbd-firefly-backports---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"
- fail http://pulpito.ceph.com/loic-2015-10-21_01:20:10-rbd-firefly-backports---basic-multi
- known bug qemu workunit refers to apt-mirror.front.sepia.ceph.com
'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c2a0476bad08d3ea42f6d2742bbb330463fa1b TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/qemu-iotests.sh'
- known bug qemu workunit refers to apt-mirror.front.sepia.ceph.com 'mkdir
Updated by Nathan Cutler over 8 years ago
powercycle¶
./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz
Updated by Nathan Cutler over 8 years ago
upgrade¶
./virtualenv/bin/teuthology-suite --machine-type vps --suite upgrade/firefly --filter=ubuntu_14.04,centos_6 --suite-branch firefly --ceph firefly-backports
- red http://pulpito.ceph.com/smithfarm-2015-10-12_02:57:09-upgrade:firefly-firefly-backports---basic-vps/
- 13 failures, e.g.:
- environmental problem
GPG key retrieval failed: [Errno 14] PYCURL ERROR 7 - "Failed to connect to 2607:f298:6050:51f3:f816:3eff:fe62:31d3: Network is unreachable"
- monthrash failure in
description: upgrade:firefly/newer/{4-finish-upgrade.yaml 0-cluster/start.yaml 1-install/v0.80.6.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 5-final/{monthrash.yaml osdthrash.yaml rbd.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
Rescheduled failed jobs:
- red http://pulpito.ceph.com/smithfarm-2015-10-12_07:19:37-upgrade:firefly-firefly-backports---basic-vps/
- only 1 failure, in
upgrade:firefly/newer/{4-finish-upgrade.yaml 0-cluster/start.yaml 1-install/v0.80.8.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final/{monthrash.yaml osdthrash.yaml rbd.yaml testrgw.yaml} distros/centos_6.5.yaml}
2015-10-12T07:51:53.544 INFO:tasks.ceph:Starting mon daemons...
2015-10-12T07:51:53.545 INFO:tasks.ceph.mon.a:Restarting daemon
2015-10-12T07:51:53.545 INFO:teuthology.orchestra.run.vpm103:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'
2015-10-12T07:51:53.624 INFO:tasks.ceph.mon.a:Started
2015-10-12T07:51:53.625 INFO:tasks.ceph.mon.b:Restarting daemon
2015-10-12T07:51:53.656 INFO:tasks.ceph.mon.b:Started
2015-10-12T07:51:53.656 INFO:tasks.ceph.mon.c:Restarting daemon
2015-10-12T07:51:53.685 INFO:tasks.ceph.mon.c:Started
[mon.b and mon.c were started on vpm056 with the same daemon-helper command]
2015-10-12T07:51:53.685 INFO:tasks.ceph:Starting osd daemons...
[osd.0 through osd.5 restarted the same way: osd.0-2 on vpm103, osd.3-5 on vpm056; each logged "journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway" on startup]
2015-10-12T07:51:53.875 INFO:tasks.ceph:Starting mds daemons...
2015-10-12T07:51:53.876 INFO:tasks.ceph.mds.a:Restarting daemon
2015-10-12T07:51:53.910 INFO:tasks.ceph.mds.a:Started
2015-10-12T07:51:54.116 INFO:tasks.ceph.mds.a.vpm103.stdout:starting mds.a at :/0
2015-10-12T07:52:00.135 INFO:tasks.ceph.mds.a.vpm103.stderr:2015-10-12 10:52:00.118600 7f1f833087a0 -1 mds.-1.-1 *** no OSDs are up as of epoch 1, waiting
2015-10-12T07:52:00.865 INFO:teuthology.orchestra.run.vpm103.stderr:max_mds = 1
2015-10-12T07:52:00.888 INFO:tasks.ceph:Waiting until ceph is healthy...
2015-10-12T07:52:00.889 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json'
2015-10-12T07:52:01.426 DEBUG:teuthology.misc:5 of 6 OSDs are up
2015-10-12T07:52:05.171 DEBUG:teuthology.misc:6 of 6 OSDs are up
2015-10-12T07:52:05.171 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2015-10-12T07:52:05.469 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN 72 pgs peering; 72 pgs stuck inactive; 72 pgs stuck unclean; clock skew detected on mon.a
2015-10-12T07:52:12.785 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN clock skew detected on mon.a
[the same 'ceph health' poll and "HEALTH_WARN clock skew detected on mon.a" response repeat every ~7 seconds from 07:52:12 through 07:54:39 -- many, many more of these]
Rescheduled the one remaining failed test:
Updated by Nathan Cutler over 8 years ago
ceph-deploy¶
./virtualenv/bin/teuthology-suite --machine-type vps --suite ceph-deploy --suite-branch firefly --ceph firefly-backports --filter=ubuntu_14,centos_6 --email ncutler@suse.cz
Updated by Loïc Dachary over 8 years ago
<ircolle> loicd - when do you anticipate releasing 0.80.11?
<loicd> ircolle: smithfarm has a better view but it's pretty much whenever we decide it's a good idea to release it
<sage> ircolle: i haven't been paying attention.. whenever the backport team thinks it's ready?
<sage> yeah
<loicd> sage: ack
<loicd> ircolle: smithfarm is running the latest tests and it's in good shape
<ircolle> well, maybe after alfredodeza catches his breath after 9.1.0?
<sage> yeah
<loicd> cool
Updated by Loïc Dachary over 8 years ago
Added https://github.com/ceph/ceph/pull/6328 for qemu-iotests
git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 5287
- |\
- | + TestPGLog: fix invalid proc_replica_log test case
- | + TestPGLog: fix noop log proc_replica_log test case
- | + TestPGLog: add test for 11358
- | + PGLog::proc_replica_log: handle split out overlapping entries
- + Pull request 5407
- |\
- | + osd: pg_interval_t::check_new_interval should not rely on pool.min_size to determine if the PG was active
- | + osd: Move IsRecoverablePredicate/IsReadablePredicate to osd_types.h
- | + common/hobject.cc: explicitly typecast p.value_.get_int()
- | + ceph_objectstore_tool.cc: change ghobject_t::NO_SHARD to shard_id_t::NO_SHARD
- | + osd: replace shard_t usage with shard_id_t
- | + osd: explicit shard_id_t() and NO_SHARD
- | + osd: loop over uint8_t instead of shard_id_t
- | + osd: factorize shard_id_t/shard_t into a struct
- | + common: WRITE_{EQ,CMP}_OPERATORS_1
- + Pull request 5526
- |\
- | + osd/OSDMap: handle incrementals that modify+del pool
- + Pull request 5529
- |\
- | + common/syncfs: fall back to sync(2) if syncfs(2) not available
- | + sync_filesystem.h: fix unreachable code
- | + mon, os: check the result of sync_filesystem.
- | + common: Don't call ioctl(BTRFS_IOC_SYNC) in sync_filesystem.
- | + common: Directly return the result of syncfs().
- + Pull request 5532
- |\
- | + Fix casing of Content-Type header
- | + rgw: we should not overide Swift sent content type
- | + rgw: send Content-Length in response for GET on Swift account.
- | + rgw: enforce Content-Type in Swift responses.
- | + rgw: force content_type for swift bucket stats request
- | + rgw: force content-type header for swift account responses without body
- | + rgw: shouldn't return content-type: application/xml if content length is 0
- + Pull request 5815
- |\
- | + osd: Keep a reference count on Connection while calling send_message()
- + Pull request 5823
- |\
- | + OSD: add scrub_finalize_wq suicide timeout
- | + OSD: add scrub_wq suicide timeout
- | + OSD: add op_wq suicide timeout
- | + OSD: add remove_wq suicide timeout
- | + OSD: add snap_trim_wq suicide timeout
- | + OSD: add recovery_wq suicide timeout
- | + OSD: add command_wq suicide timeout
- + Pull request 5997
- |\
- | + rgw: use strict_strtoll() for content length
- + Pull request 6328
- + qa: Use public qemu repo
- + use git://git.ceph.com
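The perl one-liner above turns `git log --graph --format='%H %s'` output into textile links: merge commits become pull-request links and other commits become commit links, with graph markers adjusted for the bullet list. A Python sketch of the same transformation (mirroring the one-liner, not a drop-in replacement for it):

```python
import re

def link_commits(git_log_lines):
    """Convert `git --no-pager log --format='%H %s' --graph` lines into
    textile-style bullet items, as the perl one-liner above does."""
    out = []
    for line in git_log_lines:
        line = line.replace('"', ' ')  # strip quotes that would break textile
        if re.search(r'\w+\s+Merge pull request #\d+', line):
            # merge commit -> link to the pull request
            line = re.sub(r'\w+\s+Merge pull request #(\d+).*',
                          r'"Pull request \1":https://github.com/ceph/ceph/pull/\1',
                          line)
        else:
            # ordinary commit -> link the subject to the commit hash
            line = re.sub(r'(\w+)\s+(.*)',
                          r'"\2":https://github.com/ceph/ceph/commit/\1',
                          line)
        # first graph '*' becomes '+', then the whole line becomes a bullet
        line = line.replace('*', '+', 1)
        out.append('* ' + line)
    return out
```

Feeding it the two-line graph of a merge plus its backported commit reproduces the list format shown above.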
Updated by Loïc Dachary over 8 years ago
rbd¶
run=loic-2015-10-21_01:20:10-rbd-firefly-backports---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"
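The curl/jq pipeline above asks paddles for the run's job list, keeps only the jobs whose status is `dead` or `fail`, and joins their descriptions with commas to build the `--filter` value for the rerun. The same selection in Python (a sketch operating on the JSON paddles returns for `/runs/$run/`, assumed from the command above, not a supported paddles client):

```python
def failed_job_filter(run_info):
    """Given the decoded JSON for a teuthology run (the document fetched
    by `curl http://$paddles/runs/$run/` above), return the comma-separated
    descriptions of dead/failed jobs, i.e. the teuthology-suite --filter
    value used to reschedule only the failures."""
    return ",".join(job["description"]
                    for job in run_info["jobs"]
                    if job["status"] in ("dead", "fail"))
```

This mirrors the jq expression `.jobs[] | select(.status == "dead" or .status == "fail") | .description` plus the trailing comma-join.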
Updated by Yuri Weinstein over 8 years ago
QE Validation¶
| Suite | Runs | Notes/Issues |
| rbd | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-22_23:00:01-rbd-firefly-distro-basic-openstack/ | FAILED #11104 |
| | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:35:38-rbd-firefly-distro-basic-openstack/ | fail in ovh b/c xfstests can take 6+ hrs |
| | http://pulpito.ceph.com/teuthology-2015-10-27_09:38:04-rbd-firefly-distro-basic-multi/ | rerun on baremetal |
| | http://pulpito.ceph.com/teuthology-2015-11-12_17:03:29-rbd-firefly-distro-basic-vps/ | rerun on centos, wip-13622-fix-wusui fix for #11104 |
| | http://pulpito.ceph.com/teuthology-2015-11-12_20:15:04-rbd-firefly-distro-basic-multi/ | rerun on centos, wip-13622-fix-wusui fix for #11104, in sepia; killed, see jdillaman comment in https://github.com/ceph/teuthology/pull/704 |
| rgw | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-23_23:02:01-rgw-firefly-distro-basic-openstack/ | FAILED #11104 |
| | http://pulpito.ceph.com/teuthology-2015-10-27_09:52:18-rgw-firefly-distro-basic-multi/ | |
| | http://pulpito.ceph.redhat.com/teuthology-2015-11-12_20:10:28-rgw-firefly-distro-basic-magna/ | rerun in octo/rhel, wip-13622-fix-wusui fix for #11104 (suspect octo node setup for failures) |
| | http://pulpito.ceph.com/teuthology-2015-11-12_21:24:06-rgw-firefly-distro-basic-multi/ | rerun in sepia/centos, wip-13622-fix-wusui fix for #11104 |
| fs | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-23_23:04:01-fs-firefly-distro-basic-openstack/ | FAILED #11104, #13630 |
| | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:43:12-fs-firefly-distro-basic-openstack/ | ran 6+ h |
| | http://pulpito.ceph.com/teuthology-2015-10-27_10:24:08-fs-firefly-distro-basic-multi/ | rerun in sepia |
| | http://pulpito.ceph.redhat.com/teuthology-2015-11-13_00:03:41-fs-firefly-distro-basic-magna/ | rerun in octo/rhel, wip-13622-fix-wusui fix for #11104 |
| | http://pulpito.ceph.com/teuthology-2015-11-12_21:16:20-fs-firefly-distro-basic-multi/ | rerun in sepia/centos, wip-13622-fix-wusui fix for #11104 |
| samba | http://pulpito.ceph.com/teuthology-2015-10-27_10:44:28-samba-firefly---basic-vps/ | FAILED, same as in firefly v0.80.10: #11090, #6613 |
| hadoop | http://pulpito.ceph.com/teuthology-2015-10-14_09:45:17-hadoop-hammer---basic-multi/ | PASSED |
| | http://pulpito.ceph.com/teuthology-2015-11-02_14:52:01-hadoop-hammer---basic-multi/ | |
| rest | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-31_18:16:02-rest-hammer---basic-openstack/ | PASSED |
| ceph-deploy (distros) | http://pulpito.ceph.com/teuthology-2015-10-27_13:25:44-ceph-deploy-firefly-distro-basic-vps/ | FAILED #13367 |
| upgrade/dumpling-x (to firefly) (distros) | http://pulpito.ceph.com/teuthology-2015-10-25_19:13:08-upgrade:dumpling-x-firefly-distro-basic-vps/ | PASSED |
| upgrade/dumpling-firefly-x (to giant) | http://pulpito.ceph.com/teuthology-2015-10-24_18:15:08-upgrade:dumpling-firefly-x-giant-distro-basic-vps/ | #13300 DEPRECATED AS UNSUPPORTED UPGRADE PATH |
| upgrade/dumpling-firefly-x (to hammer) (distros) | | this suite runs out of memory on VPS and is unrunnable |
| upgrade/firefly-x (to giant) (distros) | http://pulpito.ceph.com/teuthology-2015-10-24_18:18:02-upgrade:firefly-x-giant-distro-basic-vps/ | #13300 DEPRECATED AS UNSUPPORTED UPGRADE PATH |
| upgrade/firefly-x (to hammer) (distros) | http://pulpito.ceph.com/teuthology-2015-10-23_17:18:01-upgrade:firefly-x-hammer-distro-basic-vps/ | PASSED (almost: missing package for v0.80.4) #13788, #11104, #13632 |
| | http://pulpito.ceph.com/teuthology-2015-10-26_14:05:59-upgrade:firefly-x-hammer-distro-basic-vps/ | filtered in ubuntu, b/c #11104 |
| | http://pulpito.ceph.com/teuthology-2015-11-12_17:21:52-upgrade:firefly-x-hammer-distro-basic-vps/ | rerun on centos, wip-13622-fix-wusui fix for #11104 |
FAILED / PASSED
Email from Tamil and Yuri on status of this release:
From: Yuri Weinstein <yweinste@redhat.com>
Date: Mon, Nov 16, 2015 at 8:16 AM
Subject: Re: regd: ceph v0.80.11 release
To: Tamil Muthamizhan <tmuthami@redhat.com>
Cc: Ian Colle <icolle@redhat.com>, Warren Usui <wusui@redhat.com>, Sage Weil <sweil@redhat.com>

Tamil, agree. In the last several days we refocused more on testing #11104 and related issues.
Just to update: I ran several smoke tests over the weekend on CentOS -
http://pulpito.ceph.com/teuthology-2015-11-14_08:30:20-smoke-firefly-distro-basic-vps/ (with os CentOS)
http://pulpito.ceph.redhat.com/teuthology-2015-11-13_20:02:11-smoke-firefly---basic-magna/ (rhel)
http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-11-14_19:43:44-smoke-firefly-distro-basic-openstack/ (ovh CentOS)
They did not all pass, but we run smoke only on master, so I think it'd be wrong to expect them to fully pass on vps, for example, and it's probably not worthwhile to make the smoke suite run on firefly (Sage?). But they were run to test the fix for #11104 (wip-13622-fix-wusui).
rgw and fs tests forced onto baremetal in sepia simply timed out waiting for CentOS nodes.
However, upgrade/firefly-x (to hammer) passed on vps (one minor issue) and powercycle passed (two jobs still waiting for nodes), including on CentOS nodes.
I hope it gives enough data points for accepting v0.80.11 for downstream release?
Thx
YuriW

On Fri, Nov 13, 2015 at 3:53 PM, Tamil Muthamizhan <tmuthami@redhat.com> wrote:
> Adding Sage in this email thread.
>
> Thanks Yuri, I looked into the logs and we have tests passing for rados/rbd/rgw - good enough for regression in firefly on centos 7.0 and rhel 7.1
>
> ----- Original Message -----
> From: "Yuri Weinstein" <yweinste@redhat.com>
> To: "Tamil Muthamizhan" <tmuthami@redhat.com>
> Cc: "Ian Colle" <icolle@redhat.com>, "Warren Usui" <wusui@redhat.com>, "Ken Dreyer" <kdreyer@redhat.com>, "Jason Dillaman" <dillaman@redhat.com>
> Sent: Friday, November 13, 2015 1:32:50 PM
> Subject: Re: regd: ceph v0.80.11 release
>
> Only failed tests were scheduled to rerun (many on CentOS) on
> wip-13622-fix-wusui and in addition two smoke suites are running (on
> CentOS) as requested:
>
> http://pulpito.ceph.redhat.com/teuthology-2015-11-13_15:34:29-smoke-firefly-distro-basic-magna/
> http://pulpito.ceph.com/teuthology-2015-11-13_12:31:06-smoke-firefly-distro-basic-vps/
> (we don't run smoke on vps, so errors may be related to this fact)
>
> Details are in http://tracker.ceph.com/issues/11644
> Here is the gist (focus on runs related to the #11104 fix):
>
> rbd - has issues, as mentioned before
> rgw - waiting for centos nodes in sepia
> fs - waiting for centos nodes in sepia
> upgrade/firefly-x (to hammer) - passed, one minor missing-package issue
> powercycle - full rerun (as centos was filtered out before to avoid
> the 11104 issue), running; a couple of errors look like env noise so
> far, otherwise ok
>
> Thx
> YuriW
Updated by Loïc Dachary over 8 years ago
- Status changed from In Progress to Resolved