Update: we've added resize_env.sh to the base bdutil repo, so you no longer need to go to my fork for it.

Original answer:

There's no official support for resizing a bdutil-deployed cluster yet, but it's certainly something we've discussed before, and it's actually feasible to put together some basic resize support. It may take a different form once merged into the main branch, but I've pushed a first draft of resize support to my fork of bdutil. It comes as two commits: one that allows skipping all "master" operations (including create, run_command, delete, etc.), and another that adds the resize_env.sh file.

I haven't tested every combination with the other bdutil extensions, but I've at least run it successfully with the base bdutil_env.sh and with extensions/spark/spark_env.sh. In theory it should work with your bigquery and datastore extensions as well. To use it in your case:
# Assuming you initially deployed with this command (default n == 2)
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 deploy
# Before this step, edit resize_env.sh and set NEW_NUM_WORKERS to what you want.
# Currently it defaults to 5.
# Deploy only the new workers, e.g. {hadoop-w-2, hadoop-w-3, hadoop-w-4}:
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e resize_env.sh deploy
# Explicitly start the Hadoop daemons on just the new workers:
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e resize_env.sh run_command -t workers -- "service hadoop-hdfs-datanode start && service hadoop-mapreduce-tasktracker start"
# If using Spark as well, explicitly start the Spark daemons on the new workers:
./bdutil -e bigquery_env.sh,datastore_env.sh,extensions/spark/spark_env.sh -b myhdfsbucket -n 2 -e resize_env.sh run_command -t workers -u extensions/spark/start_single_spark_worker.sh -- "./start_single_spark_worker.sh"
# From now on, it's as if you originally turned up your cluster with "-n 5".
# When deleting, remember to include those extra workers:
./bdutil -b myhdfsbucket -n 5 delete
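For reference, the only edit resize_env.sh itself needs before the resize deploy is the NEW_NUM_WORKERS setting mentioned above (a minimal sketch of that one line, shown with its current default of 5):

# In resize_env.sh: the total worker count the cluster should end up with
# after the resize (old workers plus new ones). Currently defaults to 5.
NEW_NUM_WORKERS=5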
In general, a best-practice recommendation is to condense your configuration into a single file rather than passing flags every time. For example, in your case you might want a file named my_base_env.sh:
import_env bigquery_env.sh
import_env datastore_env.sh
import_env extensions/spark/spark_env.sh
NUM_WORKERS=2
CONFIGBUCKET=myhdfsbucket
The resize commands then become much shorter:
# Assuming you initially deployed with this command (default n == 2)
./bdutil -e my_base_env.sh deploy
# Before this step, edit resize_env.sh and set NEW_NUM_WORKERS to what you want.
# Currently it defaults to 5.
# Deploy only the new workers, e.g. {hadoop-w-2, hadoop-w-3, hadoop-w-4}:
./bdutil -e my_base_env.sh -e resize_env.sh deploy
# Explicitly start the Hadoop daemons on just the new workers:
./bdutil -e my_base_env.sh -e resize_env.sh run_command -t workers -- "service hadoop-hdfs-datanode start && service hadoop-mapreduce-tasktracker start"
# If using Spark as well, explicitly start the Spark daemons on the new workers:
./bdutil -e my_base_env.sh -e resize_env.sh run_command -t workers -u extensions/spark/start_single_spark_worker.sh -- "./start_single_spark_worker.sh"
# From now on, it's as if you originally turned up your cluster with "-n 5".
# When deleting, remember to include those extra workers:
./bdutil -b myhdfsbucket -n 5 delete
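As an optional sanity check afterwards (my own sketch, not part of the recipe above; it assumes run_command accepts -t master the same way it accepts -t workers):

# Hypothetical check: ask the namenode how many datanodes are live.
# After the resize it should report 5 available datanodes.
./bdutil -e my_base_env.sh run_command -t master -- "/home/hadoop/hadoop-install/bin/hadoop dfsadmin -report | grep 'Datanodes available'"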
Finally, this isn't quite 100% identical to having deployed the cluster with -n 5 in the first place; in this case, the /home/hadoop/hadoop-install/conf/slaves and /home/hadoop/spark-install/conf/slaves files on the master will be missing the new nodes. If you plan to use /home/hadoop/hadoop-install/bin/[stop|start]-all.sh or /home/hadoop/spark-install/sbin/[stop|start]-all.sh, you can manually SSH into the master and edit those files to add the new nodes to the lists; if not, there's no need to change the slaves files at all.
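As a rough sketch of that manual edit (assuming the default master name hadoop-m that goes with the hadoop-w-* workers above, that the resize added hadoop-w-2 through hadoop-w-4, and that you run it as a user allowed to write those files, e.g. the hadoop user):

# Hypothetical: from the master (e.g. after "gcloud compute ssh hadoop-m"),
# append each new worker to both slaves files so the *-all.sh scripts see them.
for w in hadoop-w-2 hadoop-w-3 hadoop-w-4; do
  echo "$w" >> /home/hadoop/hadoop-install/conf/slaves
  echo "$w" >> /home/hadoop/spark-install/conf/slaves
done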
Awesome! Is your fork still available? Just wondering what the simplest option would be for adding new nodes with new disks to an existing bdutil cluster. – 2016-02-01 12:13:50
Actually, we've added 'resize_env.sh' to the [base bdutil repo](https://github.com/GoogleCloudPlatform/bdutil/blob/master/extensions/google/experimental/resize_env.sh), so you no longer need to go to my fork for it. – 2016-02-01 20:16:19