testdb=# CREATE EXTERNAL TABLE sales_fact_1997 (
testdb(#     product_id int, time_id int, customer_id int, promotion_id int,
testdb(#     store_id int, store_sales decimal, store_cost decimal, unit_sales decimal
testdb(# ) LOCATION ('gphdfs://hz-cluster2/user/nrpt/hive-server/foodmart.db/sales_fact_1997')
testdb(# FORMAT 'TEXT' (DELIMITER ',');
CREATE EXTERNAL TABLE
testdb=# select * from sales_fact_1997 ;
ERROR: external table gphdfs protocol command ended with error. Error occurred during initialization of VM (seg0 slice1 sdw1:40000 pid=3450)
DETAIL:
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Command: 'gphdfs://le/user/nrpt/hive-server/foodmart.db/sales_fact_1997'
External table sales_fact_1997, file gphdfs://hz-cluster2/user/nrpt/hive-server/foodmart.db/sales_fact_1997
I changed the -Xmx value in the hadoop-2.5.2/etc/hadoop/hadoop-env.sh file, and I can see there is enough free memory for the JVM, but I still get this error: the external table gphdfs protocol command ends with "Error occurred during initialization of VM".
@localhost ~]$ free -m
              total        used        free      shared  buff/cache   available
Mem:            993         114         393         219         485         518
Swap:           819           0         819

I also set:

export GP_JAVA_OPT='-Xms20m -Xmx20m -XX:+DisplayVMOutputToStderr'
Can anyone help me? I created the external table successfully, but I cannot read the data from HDFS.
Thank you for your answer. I changed the GP_JAVA_OPT Xmx to 20M, but I still get the error. @Jon Roberts – wanghao
The default is 1000M, so you need to increase the memory setting, not decrease it. You need to make this change on all segment hosts, and make sure your segments plus gphdfs have enough total memory. –
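A minimal sketch of that suggestion, assuming a standard Greenplum layout: the heap values, the hostfile path, and appending to ~/.bashrc are all assumptions here, not values from the thread; adjust them to your cluster.

# Hypothetical example: raise (not lower) the gphdfs JVM heap on every
# segment host. The hostfile path and heap sizes are assumptions.
gpssh -f /home/gpadmin/hostfile_segments \
  "echo \"export GP_JAVA_OPT='-Xms256m -Xmx1024m'\" >> ~/.bashrc"

# Restart the cluster so the segment processes pick up the new setting.
gpstop -r

Note that with only ~1 GB of RAM per host (as the free -m output shows), a 1000M heap per gphdfs process may still fail; the real fix may be to add memory to the segment hosts rather than tune the heap downward.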
I thought the cause of the error was that there was not enough memory to initialize the heap, so I reduced the heap size, hoping the JVM could start with less. @Jon Roberts – wanghao