2013-05-07

OutOfMemoryError issue with the Hive Metastore server and the Cloudera Manager embedded PostgreSQL database

I am using:

Cloudera Manager Free Edition: 4.5.1 
Cloudera Hadoop Distro: CDH 4.2.0-1.cdh4.2.0.p0.10 (Parcel) 
Hive Metastore with cloudera manager embedded PostgreSQL database. 

My Cloudera Manager runs on a separate machine; it is not part of the cluster.

After setting up the cluster with Cloudera Manager, I started using Hive through Hue + Beeswax.

Everything ran fine for a while, and then all of a sudden, any query against a particular table with a large number of partitions (about 14,000) started timing out:

FAILED: SemanticException org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out 

When I noticed this, I looked at the logs and found that the Hive Metastore connection had timed out:

WARN metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out 

Seeing this, I figured there was a problem with the Hive Metastore. So I looked at the Hive Metastore log and found java.lang.OutOfMemoryErrors:

/var/log/hive/hadoop-cmf-hive1-HIVEMETASTORE-hci-cdh01.hcinsight.net.log.out: 

2013-05-07 14:13:08,744 ERROR org.apache.thrift.ProcessFunction: Internal error processing get_partitions_with_auth 
java.lang.OutOfMemoryError: Java heap space 
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
     at java.lang.reflect.Constructor.newInstance(Constructor.java:525) 
     at org.datanucleus.util.ClassUtils.newInstance(ClassUtils.java:95) 
     at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newLiteralParameter(SQLExpressionFactory.java:248) 
     at org.datanucleus.store.rdbms.scostore.RDBMSMapEntrySetStore.getSQLStatementForIterator(RDBMSMapEntrySetStore.java:323) 
     at org.datanucleus.store.rdbms.scostore.RDBMSMapEntrySetStore.iterator(RDBMSMapEntrySetStore.java:221) 
     at org.datanucleus.sco.SCOUtils.populateMapDelegateWithStoreData(SCOUtils.java:987) 
     at org.datanucleus.sco.backed.Map.loadFromStore(Map.java:258) 
     at org.datanucleus.sco.backed.Map.keySet(Map.java:509) 
     at org.datanucleus.store.fieldmanager.LoadFieldManager.internalFetchObjectField(LoadFieldManager.java:118) 
     at org.datanucleus.store.fieldmanager.AbstractFetchFieldManager.fetchObjectField(AbstractFetchFieldManager.java:114) 
     at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183) 
     at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceField(MStorageDescriptor.java) 
     at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceFields(MStorageDescriptor.java) 
     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860) 
     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879) 
     at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1647) 
     at org.datanucleus.store.fieldmanager.LoadFieldManager.processPersistable(LoadFieldManager.java:63) 
     at org.datanucleus.store.fieldmanager.LoadFieldManager.internalFetchObjectField(LoadFieldManager.java:84) 
     at org.datanucleus.store.fieldmanager.AbstractFetchFieldManager.fetchObjectField(AbstractFetchFieldManager.java:104) 
     at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183) 
     at org.apache.hadoop.hive.metastore.model.MPartition.jdoReplaceField(MPartition.java) 
     at org.apache.hadoop.hive.metastore.model.MPartition.jdoReplaceFields(MPartition.java) 
     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860) 
     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879) 
     at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1647) 
     at org.datanucleus.ObjectManagerImpl.performDetachAllOnTxnEndPreparation(ObjectManagerImpl.java:3552) 
     at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:3291) 
     at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:369) 
     at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256) 

At this point, the Hive Metastore got shut down and restarted:

2013-05-07 14:39:40,576 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: Shutting down hive metastore. 
2013-05-07 14:41:09,979 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: Starting hive metastore on port 9083 

Now, to try to fix this, I have changed the max heap size of both the Hive Metastore server and the Beeswax server:

1. Hive/Hive Metastore Server(Base)/Resource Management/Java Heap Size of Metastore Server : 2 GiB (First thing I did.) 
2. Hue/Beeswax Server(Base)/Resource Management/Java Heap Size of Beeswax Server : 2 GiB (After reading some groups posts and stuff online, I tried this as well.) 
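For comparison, on an installation that is not managed by Cloudera Manager, the equivalent heap change would normally go into hive-env.sh. This is only a sketch under that assumption (the variable and file are the usual Hive conventions, not something Cloudera Manager itself uses), with a diagnostic JVM flag added so the next OOME leaves a heap dump to inspect:

```shell
# Sketch, assuming a non-CM-managed Hive install using conf/hive-env.sh.
# HADOOP_HEAPSIZE is in megabytes; 2048 matches the 2 GiB tried above.
export HADOOP_HEAPSIZE=2048

# Dump the heap on OutOfMemoryError so you can see what filled it
# (e.g. the ~14,000 MPartition/MStorageDescriptor objects in the trace).
export HADOOP_OPTS="$HADOOP_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
```

The dump written to /tmp can then be opened with a tool such as jhat or Eclipse MAT to confirm which objects dominate the heap.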

Neither of these two steps seems to have helped, as I continue to see OOMEs in the Hive Metastore log.

Then I noticed that the actual metastore "database" is running as part of my Cloudera Manager, and I wondered whether it is the PostgreSQL process that is running out of memory. I looked for ways to increase the Java heap size of that process and found very little documentation on this.

I was wondering if you folks could help me fix this.

Should I increase the Java heap size for the embedded database? If so, where would I do that?

Is there anything else I am missing?

Thanks!


Forgot to mention: this problem only happens when I access the table with many partitions. Queries against other tables run fine. – 2013-05-07 23:37:28


Also, the same problem occurs when I run the query from the Hive command-line shell, so I guess it has nothing to do with the Beeswax or Hue interfaces. – 2013-05-08 15:58:36


Another interesting thing: when I increased the Hive Metastore Java heap size to 4 GB this morning and tried again, I did not see any OOMEs (at least so far), but the queries still timed out. – 2013-05-08 16:04:53

Answer


Have you tried doing the following?

'SET hive.metastore.client.socket.timeout=300;' 

This fixed the issue for me. Let me know how it goes.
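To make the change persistent instead of per-session, the same property can be set in hive-site.xml on the client side. A minimal fragment (the value is in seconds; the exact config file location depends on your install, and under Cloudera Manager it is usually set through a safety-valve override rather than by editing the file directly):

```xml
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>300</value>
  <description>Thrift socket timeout for metastore client calls; raised because listing ~14,000 partitions can exceed the default.</description>
</property>
```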