Posting a subjob result to the master failed because the master was unresponsive. Example slave log:
[2017-10-06 15:23:10.198] 5452 DEBUG Bld20637-Sub221 git Command completed with exit code 0.
[2017-10-06 15:24:59.538] 5452 ERROR Bld20637-Sub222 unhandled_excep Unhandled exception handler caught exception.
Traceback (most recent call last):
File "/home/jenkins/.virtualenvs/clusterrunner/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 319, in _make_request
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jenkins/ClusterRunnerBuild/app/util/safe_thread.py", line 18, in run
File "/usr/local/lib/python3.4/threading.py", line 868, in run
File "/home/jenkins/ClusterRunnerBuild/app/slave/cluster_slave.py", line 313, in _execute_subjob
File "/home/jenkins/ClusterRunnerBuild/app/util/decorators.py", line 38, in function_with_retries
File "/home/jenkins/ClusterRunnerBuild/app/util/network.py", line 62, in post
File "/home/jenkins/ClusterRunnerBuild/app/util/network.py", line 150, in _request
File "/home/jenkins/.virtualenvs/clusterrunner/lib/python3.4/site-packages/requests/sessions.py", line 456, in request
File "/home/jenkins/.virtualenvs/clusterrunner/lib/python3.4/site-packages/requests/sessions.py", line 559, in send
File "/home/jenkins/.virtualenvs/clusterrunner/lib/python3.4/site-packages/requests/adapters.py", line 384, in send
requests.exceptions.Timeout: HTTPConnectionPool(host='pod4101-automation1024.pod.box.net', port=43000): Read timed out. (read timeout=120)
In this case the slave service actually shuts down because of the Timeout exception, but since the master was still unresponsive, the slave's call to notify the master about the shutdown also failed. The master still thinks this node is online. (The getresponse() 'buffering' TypeError at the top of the chained traceback appears to be a harmless urllib3-on-Python-3 artifact; the real failure is the read Timeout at the bottom.)
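The failure mode above can be sketched abstractly. This is a minimal illustration (a hypothetical post helper, not ClusterRunner's actual code) of how an exception raised inside an exception handler leaves the master uninformed:

```python
class MasterUnresponsive(Exception):
    """Stands in for requests.exceptions.Timeout in this sketch."""

def execute_subjob(post):
    """post(path) is a hypothetical helper that raises MasterUnresponsive
    while the master is unreachable."""
    try:
        post("/subjob_result")  # times out: master is unresponsive
    except MasterUnresponsive:
        # The slave decides to shut down, but this notification goes to
        # the same unresponsive master, so it fails too -- no one ever
        # marks the node offline.
        post("/shutdown")

def unreachable_master(path):
    raise MasterUnresponsive(path)

try:
    execute_subjob(unreachable_master)
except MasterUnresponsive as exc:
    print(f"shutdown notification also failed: {exc}")
```

The second raise inside the except block is exactly the "During handling of the above exception, another exception occurred" chain in the traceback above.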
There are a few situations where a slave node can get "stuck": it enters a state where it will never finish what it is doing, but it is also never marked offline, so any external capacity monitoring or alerting around offline nodes will not report accurately.
We have seen this occur in the following situations:
Could not post an idle notification to the master because the master is unresponsive.
This can happen for several reasons: transient network issues, the cluster master's API server being tied up by a large download or upload, etc.
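Transient failures like these are usually handled by retrying (the traceback above shows the post already going through a function_with_retries decorator in app/util/decorators.py). A generic sketch of that pattern, with made-up parameter names since the real signature isn't shown here:

```python
import time

def with_retries(attempts=3, delay=1.0, exceptions=(ConnectionError,)):
    """Hypothetical retry decorator in the spirit of ClusterRunner's
    function_with_retries; the real signature may differ."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise            # out of retries: surface the error
                    time.sleep(delay)    # transient issue? wait and retry
        return wrapper
    return decorator

@with_retries(attempts=3, delay=0.0)
def post_idle_notification():
    ...  # placeholder for the actual HTTP POST to the master
```

Note that retries only help if the master recovers within the retry window; a long outage still leaves the slave stuck, which is the problem described here.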
Posting a subjob result to the master failed because the master is unresponsive (this is the case shown in the log excerpt above).
Here the slave service actually shuts down because of the Timeout exception, but since the master was still unresponsive, the slave's attempt to notify the master about the shutdown also failed. The master still thinks the node is online.
Setup failed because the disk is full.
I suspect this error can also occur without the exception being recorded in the logs: if the disk fills up completely, the OSError exception cannot be written to the log either.
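For the disk-full case, one hedged mitigation sketch: check free space before starting setup, so the slave can refuse the work while it can still talk to the master. The threshold and path here are assumptions, not ClusterRunner's actual behavior:

```python
import shutil

MIN_FREE_BYTES = 1 * 1024 ** 3  # assumed threshold: 1 GiB

def has_enough_disk(path="/", min_free=MIN_FREE_BYTES):
    """Return True if the filesystem containing `path` has at least
    min_free bytes available. Illustrative only: ClusterRunner does not
    necessarily perform this check."""
    return shutil.disk_usage(path).free >= min_free
```

A slave could run this check before each subjob setup and report itself unhealthy to the master on failure, instead of dying mid-setup with an unloggable OSError.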
In all of these cases, the master still thinks the slave node is doing real work, even though the slave service will never recover. When this happens, the cluster is operating at reduced capacity but it's almost impossible to detect externally via the API.
Currently the only way we can tell that this has happened is that our dashboard shows these nodes as busy while the builds they are supposedly working on have already completed. From the API, this looks like the /slave endpoint listing a node as working on build X while the /queue endpoint does not show build X as in progress (usually because other nodes finished all of its subjobs). I believe some of the above errors could also cause builds to appear stuck forever.
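The cross-check just described could be automated. Here is a sketch of the comparison logic, operating on already-fetched JSON from the /slave and /queue endpoints; the field names are assumptions for illustration, since the actual response schema isn't shown in this report:

```python
def find_stuck_slaves(slaves, queue):
    """Return slaves that claim to be working on a build that is no
    longer in progress.

    slaves: list of dicts like {"url": ..., "current_build_id": ...}
    queue:  list of dicts like {"id": ...} for in-progress builds
    (both shapes are assumed for this sketch).
    """
    in_progress = {build["id"] for build in queue}
    return [
        slave for slave in slaves
        if slave.get("current_build_id") is not None
        and slave["current_build_id"] not in in_progress
    ]
```

Run periodically against the two endpoints, this would surface stuck nodes without relying on someone noticing a dashboard discrepancy.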