I am trying to restart Python, gunicorn, and Spark right after Capistrano finishes a deploy, but I get the error below. When I run the same commands on the server over SSH, they work fine. The problem only occurs when starting Python, Spark, and Gunicorn from the deploy.
The task in deploy.rb:
desc 'Restart django'
task :restart_django do
  on roles(:django), in: :sequence, wait: 5 do
    within "#{fetch(:deploy_to)}/current/" do
      execute "cd #{fetch(:deploy_to)}/current/ && source bin/activate"
      execute "sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python && pkill -f gunicorn && pkill -f spark"
      # execute "cd /home/ubuntu/code/spark-2.1.0-bin-hadoop2.7/sbin/ && ./start-master.sh && ./start-slave.sh spark://127.0.0.1:7077;"
      # execute "sleep 20"
      # execute "cd /home/ubuntu/code/ && nohup gunicorn example.wsgi:application --name example --workers 4 &"
    end
  end
end
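For reference, here is a minimal variant I am considering. It rests on two assumptions of mine that are not confirmed yet: that pkill (and fuser -k) exit with status 1 when nothing matches, which would abort the && chain, and that "pkill -f python" can match the command line of the very shell Capistrano spawns to run it (so I left that call out of the sketch). Also, each execute runs in its own SSH shell, so I keep the commands independent instead of chaining them.

desc 'Restart django'
task :restart_django do
  on roles(:django), in: :sequence, wait: 5 do
    within "#{fetch(:deploy_to)}/current/" do
      # Each execute gets a fresh non-interactive shell, so no state is shared between calls.
      execute "sudo /etc/init.d/supervisor stop"
      # "|| true" keeps a no-match exit status 1 from failing the task.
      execute "sudo fuser -k 8000/tcp || true"
      execute "pkill -f gunicorn || true"
      execute "pkill -f spark || true"
    end
  end
end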
Deploy output:
cap dev deploy:restart_django
Using airbrussh format.
Verbose output is being written to log/capistrano.log.
00:00 deploy:restart_django
01 cd /home/ubuntu/code/ && source bin/activate
✔ 01 [email protected] 2.109s
02 sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as [email protected]: sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark exit status: 1
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stdout: Stopping supervisor: supervisord.
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stderr: Nothing written
SSHKit::Command::Failed: sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark exit status: 1
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stdout: Stopping supervisor: supervisord.
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stderr: Nothing written
Tasks: TOP => deploy:restart_django
(See full trace by running task with --trace)