
Adding or removing nodes in an existing GCE hadoop/spark cluster with bdutil

I'm getting started running Spark on Google Compute Engine, deploying with bdutil (from the GoogleCloudPlatform GitHub), as follows:

./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket deploy 

I expect I might start with a 2-node cluster (the default) and later want to add another worker node to handle a big job. If possible, I'd like to do this without completely tearing down and redeploying the cluster.

I've tried redeploying with a different number of nodes, and also running "create" and "run_command_group install_connectors", but for each of these I get errors about nodes that already exist, e.g.

./bdutil -n 3 -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket deploy 

./bdutil -n 3 -b myhdfsbucket create 
./bdutil -n 3 -t workers -b myhdfsbucket run_command_group install_connectors 

I've also tried snapshotting and cloning a worker that was already running, but not all the services seemed to start up correctly, which left me a bit out of my depth.

Any guidance on how I can/should add and/or remove nodes from an existing cluster?

Answer


Update: We've added resize_env.sh to the base bdutil repo, so you no longer need to go to my fork for it.
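With the merged version, the same workflow sketched below should work by passing the experimental extension path to -e instead of pulling from my fork. A hedged sketch, assuming the file sits at the path linked in the comments (extensions/google/experimental/resize_env.sh) in your bdutil checkout:

# Edit NEW_NUM_WORKERS in extensions/google/experimental/resize_env.sh first, then: 
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e extensions/google/experimental/resize_env.sh deploy 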

Original answer:

There's no official support for resizing a bdutil-deployed cluster, but it's certainly something we've discussed before, and in fact it's feasible to put together some basic support for resizing. This may take a different form once merged into the main branch, but I've pushed a first draft of resize support to my fork of bdutil. It's implemented in two commits: one that allows skipping all "master" operations (including create, run_command, delete, etc.) and one that adds the resize_env.sh file.

I haven't tested it against every combination of the other bdutil extensions, but I've at least run it successfully with the base bdutil_env.sh plus extensions/spark/spark_env.sh. In theory it should also work with your bigquery and datastore extensions. To use it in your case:

# Assuming you initially deployed with this command (default n == 2) 
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 deploy 

# Before this step, edit resize_env.sh and set NEW_NUM_WORKERS to what you want. 
# Currently it defaults to 5. 
# Deploy only the new workers, e.g. {hadoop-w-2, hadoop-w-3, hadoop-w-4}: 
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e resize_env.sh deploy 

# Explicitly start the Hadoop daemons on just the new workers: 
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e resize_env.sh run_command -t workers -- "service hadoop-hdfs-datanode start && service hadoop-mapreduce-tasktracker start" 

# If using Spark as well, explicitly start the Spark daemons on the new workers: 
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e resize_env.sh run_command -t workers -u extensions/spark/start_single_spark_worker.sh -- "./start_single_spark_worker.sh" 

# From now on, it's as if you originally turned up your cluster with "-n 5". 
# When deleting, remember to include those extra workers: 
./bdutil -b myhdfsbucket -n 5 delete 

In general, the recommended best practice is to condense your configuration into a single file rather than always passing flags. For example, in your case you might want a file named my_base_env.sh:

import_env bigquery_env.sh 
import_env datastore_env.sh 
import_env extensions/spark/spark_env.sh 

NUM_WORKERS=2 
CONFIGBUCKET=myhdfsbucket 

Then the resize commands are much shorter:

# Assuming you initially deployed with this command (default n == 2) 
./bdutil -e my_base_env.sh deploy 

# Before this step, edit resize_env.sh and set NEW_NUM_WORKERS to what you want. 
# Currently it defaults to 5. 
# Deploy only the new workers, e.g. {hadoop-w-2, hadoop-w-3, hadoop-w-4}: 
./bdutil -e my_base_env.sh -e resize_env.sh deploy 

# Explicitly start the Hadoop daemons on just the new workers: 
./bdutil -e my_base_env.sh -e resize_env.sh run_command -t workers -- "service hadoop-hdfs-datanode start && service hadoop-mapreduce-tasktracker start" 

# If using Spark as well, explicitly start the Spark daemons on the new workers: 
./bdutil -e my_base_env.sh -e resize_env.sh run_command -t workers -u extensions/spark/start_single_spark_worker.sh -- "./start_single_spark_worker.sh" 

# From now on, it's as if you originally turned up your cluster with "-n 5". 
# When deleting, remember to include those extra workers: 
./bdutil -b myhdfsbucket -n 5 delete 

Finally, this isn't quite 100% the same as if you'd originally deployed the cluster with -n 5; in that case, the /home/hadoop/hadoop-install/conf/slaves and /home/hadoop/spark-install/conf/slaves files on your master node would already list the new nodes, whereas here they'll be missing them. If you plan to use /home/hadoop/hadoop-install/bin/[stop|start]-all.sh or /home/hadoop/spark-install/sbin/[stop|start]-all.sh, you can manually SSH to the master node and edit those files to add the new nodes to the lists; if not, there's no need to change the slaves files.
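A minimal sketch of that manual edit, assuming a 2-to-5 resize with the new workers named {hadoop-w-2, hadoop-w-3, hadoop-w-4} as above, and a hypothetical master hostname of hadoop-m (adjust names to match your deployment):

# Hypothetical master name; SSH in with gcloud: 
gcloud compute ssh hadoop-m 

# On the master, append each new worker to both slaves files 
# (run as the hadoop user, or with sudo, if the files aren't writable): 
for w in hadoop-w-2 hadoop-w-3 hadoop-w-4; do 
 echo "$w" >> /home/hadoop/hadoop-install/conf/slaves 
 echo "$w" >> /home/hadoop/spark-install/conf/slaves 
done 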


Awesome! Is your fork still available? Just wondering what the easiest option would be for adding nodes with new disks to an existing bdutil cluster. – 2016-02-01 12:13:50


Actually, we've added 'resize_env.sh' to the [base bdutil repo](https://github.com/GoogleCloudPlatform/bdutil/blob/master/extensions/google/experimental/resize_env.sh), so you don't need to go to my fork for it anymore. – 2016-02-01 20:16:19