
I connected Tableau to Hive, and it worked perfectly for the past 5-7 days. Now it has stopped connecting to Hive, failing with the error: "[HiveODBC] (34): Error from Hive: Internal error processing ExecuteStatement".

Here is the log from hiveserver2:

2014-06-23 06:06:19,888 ERROR [pool-5-thread-5]: thrift.ProcessFunction (ProcessFunction.java:process(41)) – Internal error processing ExecuteStatement 
    java.lang.OutOfMemoryError: unable to create new native thread 
    at java.lang.Thread.start0(Native Method) 
    at java.lang.Thread.start(Thread.java:713) 
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949) 
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360) 
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110) 
    at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:638) 
    at org.apache.hadoop.hive.ql.hooks.ATSHook.run(ATSHook.java:84) 
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1211) 
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1089) 
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:912) 
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:907) 
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144) 
    at org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:174) 
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231) 
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatement(HiveSessionImpl.java:212) 
    at org.apache.hive.service.cli.CLIService.executeStatement(CLIService.java:220) 
    at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:346) 
    at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313) 
    at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298) 
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)   
    at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:55) 
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:744) 

I googled this and found a report of a memory-leak bug, but I don't know whether it affects Hive 0.13 as well. I am using HDP 2.1 with Ambari 1.5.
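
The key line in the trace is `java.lang.OutOfMemoryError: unable to create new native thread`: the JVM asked the operating system for another thread and was refused, which usually points at a per-user process/thread limit (`ulimit -u`) or exhausted native memory rather than a full Java heap. A minimal diagnostic sketch, assuming HiveServer2 runs on Linux as the `hive` user and that `pgrep` and `/proc` are available (the process-matching pattern is an assumption about its command line):

    # Find the HiveServer2 process (pattern is an assumption, not a guaranteed match)
    HS2_PID=$(pgrep -f -u hive HiveServer2 | head -n1)

    # How many native threads the JVM currently holds
    ps -o nlwp= -p "$HS2_PID"

    # Effective limits of the running process; "Max processes" caps
    # threads per user and is what ulimit -u reports
    grep -E 'Max (processes|open files)' "/proc/$HS2_PID/limits"

    # Total threads owned by the hive user across all its processes
    ps -L -u hive --no-headers | wc -l

If the thread count climbs with every query and never falls, the ATSHook executor visible in the trace is a plausible leak point, and restarting HiveServer2 only postpones the error.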


I believe this may be because the system has run out of memory to create file descriptors. Can you run 'ulimit -a' and post the result? If it is '1024', I would suggest raising it to '2048'. – visakh


I have restarted the server for now and it is running fine, but I suspect the error may occur again. Here is the ulimit output. – user3769729


ulimit -a

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 59900
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 32768
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited

– user3769729
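
Note that the pasted output has no `max user processes (-u)` line; that limit, not open files, is the one that usually produces "unable to create new native thread", and open files is already 32768 here. A hedged sketch of checking and persistently raising both limits, assuming a PAM-based Linux where HiveServer2 runs as the `hive` user (the numbers are illustrative, not tuned recommendations):

    # Current per-user process/thread cap for the hive user
    sudo -u hive bash -c 'ulimit -u'

    # Raise the caps persistently (illustrative values)
    printf '%s\n' \
      'hive  soft  nproc   16000' \
      'hive  hard  nproc   16000' \
      'hive  soft  nofile  32768' \
      'hive  hard  nofile  32768' | sudo tee -a /etc/security/limits.conf

    # Restart HiveServer2 (e.g. through Ambari), then confirm against the live process
    grep 'Max processes' "/proc/$(pgrep -f HiveServer2 | head -n1)/limits"

New limits only apply to processes started after the change, so the HiveServer2 restart is required.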
