Bug #21491
OVH slave java.io.EOFException error
Status: Closed
Description
Creating this bug to track debugging of this problem.
Example output:
Building remotely on 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0 (libvirt python3 ceph_ansible_pr_zesty vagrant) in workspace /home/jenkins-build/build/workspace/ceph-ansible-prs-luminous-ansible2.3-bluestore_dmcrypt_journal
FATAL: java.io.IOException: Unexpected termination of the channel
java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2638)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3113)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:349)
    at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59)
Caused: java.io.IOException: Unexpected termination of the channel
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:73)
Caused: hudson.remoting.RequestAbortedException
    at hudson.remoting.Request.abort(Request.java:327)
    at hudson.remoting.Channel.terminate(Channel.java:980)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:92)
    at ......remote call to 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0(Native Method)
    at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1647)
    at hudson.remoting.Request.call(Request.java:190)
    at hudson.remoting.Channel.call(Channel.java:895)
    at hudson.FilePath.act(FilePath.java:987)
    at hudson.FilePath.act(FilePath.java:976)
    at org.jenkinsci.plugins.gitclient.Git.getClient(Git.java:137)
    at hudson.plugins.git.GitSCM.createClient(GitSCM.java:756)
    at hudson.plugins.git.GitSCM.createClient(GitSCM.java:747)
    at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1109)
    at hudson.scm.SCM.checkout(SCM.java:495)
    at hudson.model.AbstractProject.checkout(AbstractProject.java:1212)
    at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:566)
    at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:491)
    at hudson.model.Run.execute(Run.java:1724)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:419)
[PostBuildScript] - Execution post build scripts.
ERROR: Build step failed with exception
java.lang.NullPointerException: no workspace from node hudson.slaves.DumbSlave[167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0] which is computer hudson.slaves.SlaveComputer@46526cb4 and has channel null
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:88)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript.processBuildSteps(PostBuildScript.java:204)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript.processScripts(PostBuildScript.java:143)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript._perform(PostBuildScript.java:105)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript.perform(PostBuildScript.java:85)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)
    at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:682)
    at hudson.model.Build$BuildExecution.post2(Build.java:186)
    at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:627)
    at hudson.model.Run.execute(Run.java:1749)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:419)
Build step 'Execute a set of scripts' marked build as failure
Setting status of a069a6fe632100bd478e66b9500205170bd917e3 to FAILURE with url https://2.jenkins.ceph.com/job/ceph-ansible-prs-luminous-ansible2.3-bluestore_dmcrypt_journal/579/ and message: 'FAIL - luminous-ansible2.3-bluestore_dmcrypt_journal '
Using context: Testing: luminous-ansible2.3-bluestore_dmcrypt_journal
Finished: FAILURE
I haven't been able to capture master or slave logs when this occurs yet.
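Until we catch it live, one option is to keep a master-side record of agent channel state around the failure window. A minimal sketch (the polling interval and log path are arbitrary choices; the computer API fields used are standard Jenkins remote API):

# Hypothetical watcher: poll the Jenkins computer API every 30s and append
# each agent's offline flag and cause, so there is a server-side trail even
# if the slave vanishes before its own logs can be collected.
JENKINS=https://2.jenkins.ceph.com
while sleep 30; do
    date >> /var/log/jenkins-agent-state.log
    curl -s "$JENKINS/computer/api/json?tree=computer[displayName,offline,offlineCauseReason]" \
        >> /var/log/jenkins-agent-state.log
done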
Updated by David Galloway over 6 years ago
- Status changed from New to In Progress
For comparison, here is an intentional slave disconnection (via the openstack server delete command):
FATAL: command execution failed
java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2638)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3113)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:349)
    at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59)
Caused: java.io.IOException: Unexpected termination of the channel
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:73)
Caused: java.io.IOException: Backing channel '167.114.241.86+ceph_ansible_pr_zesty__c29da6d3-be11-447a-aaf4-d700af447ee4' is disconnected.
    at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:193)
    at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:260)
    at com.sun.proxy.$Proxy65.isAlive(Unknown Source)
    at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1138)
    at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1130)
    at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript.processBuildSteps(PostBuildScript.java:204)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript.processScripts(PostBuildScript.java:143)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript._perform(PostBuildScript.java:105)
    at org.jenkinsci.plugins.postbuildscript.PostBuildScript.perform(PostBuildScript.java:85)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)
    at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:682)
    at hudson.model.Build$BuildExecution.post2(Build.java:186)
    at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:627)
    at hudson.model.Run.execute(Run.java:1749)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:419)
Build step 'Execute a set of scripts' marked build as failure
Setting status of eba4968a36e8841af0f75ba19e09e78f7eaa81e2 to FAILURE with url https://2.jenkins.ceph.com/job/ceph-ansible-prs-luminous-ansible2.3-docker_cluster/522/ and message: 'FAIL - luminous-ansible2.3-docker_cluster '
Using context: Testing: luminous-ansible2.3-docker_cluster
Finished: FAILURE
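Note the difference in signatures: the intentional deletion fails with java.io.IOException: Backing channel '...' is disconnected while a build step is running, whereas the spontaneous failure above aborts with hudson.remoting.RequestAbortedException during the initial git checkout. If archived build logs are available, a rough classification along these lines could show how often each mode occurs (the log path is an assumption; point it at the real Jenkins home):

# Count the two disconnect signatures across archived build logs
# (hypothetical path; adjust to the actual $JENKINS_HOME layout).
grep -rl "hudson.remoting.RequestAbortedException" /var/lib/jenkins/jobs/*/builds/*/log | wc -l
grep -rl "is disconnected" /var/lib/jenkins/jobs/*/builds/*/log | wc -l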
Updated by David Galloway over 6 years ago
OK, it took me forever to find a timestamp for the job anywhere, but the problem was glaringly obvious once I searched.
For: https://2.jenkins.ceph.com/job/ceph-ansible-prs-luminous-ansible2.3-bluestore_dmcrypt_journal/579/
Job started at Sep 21, 2017 8:13:13 AM
In /var/log/syslog:

/var/log/syslog.1:Sep 21 08:04:54 jenkins2 gunicorn_pecan[28343]: 2017-09-21 08:04:54,978 [INFO ] [mita.providers.openstack][MainThread] created node: <Node: uuid=f63a59920e87a182532b98b7e3f310f222f88849, name=ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0, state=PENDING, public_ips=[], private_ips=[], provider=OpenStack ...>
/var/log/syslog.1:Sep 21 08:13:40 jenkins2 gunicorn_pecan[28343]: 2017-09-21 08:13:40,635 [INFO ] [mita.controllers.nodes][MainThread] Marking cee5ccaa-8615-4159-a3b5-c8133414e4a0 as active.
/var/log/syslog.1:Sep 21 08:15:40 jenkins2 gunicorn_pecan[28343]: 2017-09-21 08:15:40,656 [INFO ] [mita.controllers.nodes][MainThread] Marking cee5ccaa-8615-4159-a3b5-c8133414e4a0 as active.
/var/log/syslog.1:Sep 21 08:55:16 jenkins2 gunicorn_pecan[28343]: 2017-09-21 08:55:16,615 [INFO ] [mita.controllers.nodes][MainThread] [jenkins] removing node: 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0
/var/log/syslog.1:Sep 21 08:55:16 jenkins2 gunicorn_pecan[28343]: 2017-09-21 08:55:16,699 [INFO ] [mita.controllers.nodes][MainThread] [cloud] destroying node: ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0
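Laying those timestamps out: the node was created at 08:04:54, the job started at 08:13:13, mita marked the node active at 08:13:40 and again at 08:15:40, and the node was not removed and destroyed until 08:55:16, well after the build had already failed. The lifecycle above was recovered by grepping the node UUID across the rotated syslogs, e.g.:

# Trace one node's lifecycle across current and rotated syslogs by its UUID.
grep cee5ccaa-8615-4159-a3b5-c8133414e4a0 /var/log/syslog /var/log/syslog.1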
From mita logs:
root@jenkins2:/var/log/celery# grep cee5ccaa-8615-4159-a3b5-c8133414e4a0 mita-celery.service.log
[2017-09-21 08:13:10,298: INFO/ForkPoolWorker-7] reason was: 167.114.228.255+centos7_small__ab5a98cd-1450-495c-840d-4629789554a1 doesn’t have label vagrant&&libvirt; 167.114.228.62+centos7_small__873d7cf6-b138-44ee-bef9-ecc10931fa41 doesn’t have label vagrant&&libvirt; 167.114.230.23+centos7_small__b42b5b70-f577-4842-a9d1-a0e2f95085ca doesn’t have label vagrant&&libvirt; 167.114.230.26+centos7_small__e3a59946-3111-4296-a029-8a6040d12253 doesn’t have label vagrant&&libvirt; 167.114.230.47+trusty_small__d113ec18-2c1c-4337-9c06-64f3f750260e doesn’t have label vagrant&&libvirt; 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0 is offline; 167.114.244.110+ceph_ansible_pr_zesty__3ed48f2c-b242-48cc-8abe-57ab0c4f5feb is offline; 167.114.244.126+ceph_ansible_pr_zesty__c07dcbf3-6b3e-4a65-a24e-623d3512815a is offline; Executor slot already in use; Jenkins doesn’t have label vagrant&&libvirt
[... the identical "reason was: ..." message repeats from 08:13:10,299 through 08:13:10,308 ...]
[2017-09-21 08:13:10,620: INFO/ForkPoolWorker-2] found an idle node: 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0
[2017-09-21 08:13:40,628: INFO/ForkPoolWorker-5] 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0 is not idle, reset node.idle_since
[2017-09-21 08:14:10,631: INFO/ForkPoolWorker-3] found an idle node: 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0
[2017-09-21 08:14:40,741: INFO/ForkPoolWorker-5] found an idle node: 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0
[2017-09-21 08:15:10,633: INFO/ForkPoolWorker-7] found an idle node: 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0
[2017-09-21 08:15:40,652: INFO/ForkPoolWorker-8] 167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0 is not idle, reset node.idle_since
This sort of leads me to believe the job starts (or attempts to start) on the node before mita marks it as active?
Updated by David Galloway over 6 years ago
Nevermind. Marking "active" just resets idle_since in the db.
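For anyone reading along later: whether Jenkins itself considers the agent idle can be checked independently of mita's idle_since bookkeeping, since the computer API exposes separate idle and offline booleans. A quick check, using the node name from the logs above (the '+' in the node name may need URL-encoding as %2B depending on the client):

# Ask Jenkins whether it considers this agent idle and/or offline,
# independent of mita's database state.
curl -s "https://2.jenkins.ceph.com/computer/167.114.243.194+ceph_ansible_pr_zesty__cee5ccaa-8615-4159-a3b5-c8133414e4a0/api/json?tree=idle,offline"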
Updated by David Galloway over 2 years ago
- Status changed from In Progress to Closed
This was only happening with ephemeral Jenkins builders that we don't use anymore.