Asked 2016-09-29 · 34 views · score 6

I am running the ELK stack in Docker for log management, currently with ES 1.7, Logstash 1.5.4 and Kibana 4.1.4. Now I am trying to upgrade Elasticsearch to 2.4.0 in Docker using the tar.gz file found at https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz. Since ES 2.x does not allow running as the root user, I start Elasticsearch 2.4.0 as root inside the Docker container with the

-Des.insecure.allow.root=true 

option while running the elasticsearch service, but my container does not start. The logs do not mention any problem.
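For context, inside the container the service is launched along these lines (a sketch, not the exact entrypoint; `ES_DIR` matches the install path `/opt/log-management/elasticsearch` seen in the logs):

```sh
# Sketch: run the bundled elasticsearch binary as root, which
# ES 2.x only permits when this flag is passed explicitly.
${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true
```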

% Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 
100 874 100 874 0 0 874k 0 --:--:-- --:--:-- --:--:-- 853k 
//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found 

[email protected] start /opt/log-management/Scheduler 
node scheduler-app.js 

[email protected] start /opt/log-management/ESExportWrapper 
node app.js 
Jobs are registered 
[2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576] 
[2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea! 
Native thread-sleep not available. 
This will result in much slower performance, but it will still work. 
You should re-install spawn-sync or upgrade to the lastest version of node if possible. 
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details 
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z] 
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ... 
Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5 
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20 
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15 
[2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex, lang-expression, lang-groovy], plugins [], sites [] 
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs] 
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb], compressed ordinary object pointers [true] 
[2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead 
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized 
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ... 
[2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300} 
[2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA 

Any clues would be appreciated.

Edit 1: Since `//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found` is an error and the Docker image has no `hostname` utility, I tried using the `uname -n` command to get HOSTNAME in ES. It no longer throws the hostname error, but the problem remains: it does not start. Is this substitute correct?
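The substitution can be sketched like this (a hypothetical helper, not the actual line 134 of `bin/elasticsearch`): fall back to `uname -n` only when the `hostname` binary is missing, since both print the kernel's node name.

```shell
#!/bin/sh
# Fallback for images that ship without the `hostname` utility:
# `uname -n` reports the same kernel node name.
if command -v hostname >/dev/null 2>&1; then
  NODE_NAME=$(hostname)
else
  NODE_NAME=$(uname -n)
fi
echo "$NODE_NAME"
```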

One more doubt: the ES 1.7 setup that is currently up and running does not have the `hostname` utility either, yet it runs without any problem. Confusing. Logs after using `uname -n`:

% Total % Received % Xferd Average Speed Time Time  Time Current 
           Dload Upload Total Spent Left Speed 
100 1083 100 1083 0  0 1093k  0 --:--:-- --:--:-- --:--:-- 1057k 

> [email protected] start /opt/log-management/ESExportWrapper 
> node app.js 


> [email protected] start /opt/log-management/Scheduler 
> node scheduler-app.js 

Jobs are registered 
[2016-09-30 10:10:37,785][INFO ][bootstrap    ] max_open_files [1048576] 
[2016-09-30 10:10:37,822][WARN ][bootstrap    ] running as ROOT user. this is a bad idea! 
Native thread-sleep not available. 
This will result in much slower performance, but it will still work. 
You should re-install spawn-sync or upgrade to the lastest version of node if possible. 
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details 
[2016-09-30 10:10:37,993][INFO ][node      ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z] 
[2016-09-30 10:10:37,993][INFO ][node      ] [Helleyes] initializing ... 
Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5 
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20 
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15 
[2016-09-30 10:10:38,435][INFO ][plugins     ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites [] 
[2016-09-30 10:10:38,455][INFO ][env      ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs] 
[2016-09-30 10:10:38,456][INFO ][env      ] [Helleyes] heap size [7.8gb], compressed ordinary object pointers [true] 
[2016-09-30 10:10:38,483][WARN ][threadpool    ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead 
[2016-09-30 10:10:40,151][INFO ][node      ] [Helleyes] initialized 
[2016-09-30 10:10:40,152][INFO ][node      ] [Helleyes] starting ... 
[2016-09-30 10:10:40,278][INFO ][transport    ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300} 
[2016-09-30 10:10:40,283][INFO ][discovery    ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ 
[2016-09-30 10:10:40,360][WARN ][transport.netty   ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection 
java.lang.NullPointerException 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[2016-09-30 10:10:40,360][WARN ][transport.netty   ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection 
java.lang.NullPointerException 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[2016-09-30 10:10:41,798][WARN ][transport.netty   ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection 
java.lang.NullPointerException 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[2016-09-30 10:10:41,800][WARN ][transport.netty   ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection 
java.lang.NullPointerException 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[2016-09-30 10:10:43,302][WARN ][transport.netty   ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection 
java.lang.NullPointerException 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[2016-09-30 10:10:43,303][WARN ][transport.netty   ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection 
java.lang.NullPointerException 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174) 
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) 
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) 
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) 
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) 

    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[2016-09-30 10:10:44,807][INFO ][cluster.service  ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300}, reason: zen-disco-join(elected_as_master, [0] joins received) 
[2016-09-30 10:10:44,852][INFO ][http     ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200} 
[2016-09-30 10:10:44,852][INFO ][node     ] [Helleyes] started 
[2016-09-30 10:10:44,984][INFO ][gateway    ] [Helleyes] recovered [32] indices into cluster_state 

The deployment then fails with:

failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.240.118.68:9200"} 

Edit 2: Even after the `hostname` utility is installed and working correctly, the container does not start. The logs are the same as in edit 1.

Edit 3: The container does start but is unreachable at http://nodeip:9200. Of the 3 nodes, only 1 has 2.4; the other 2 still have 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, a curl to localhost:9200 returns the running elasticsearch response, but it is unreachable from outside.

Edit 4: I tried running a basic installation of ES 2.4 on the cluster, on the same setup where ES 1.7 works fine. I ran the ES migration plugin to check whether the cluster can run ES 2.4 and it gave me green. The basic installation details are below.

Dockerfile

#Pulling SLES12 thin base image 
FROM private-registry-1 

#Author 
MAINTAINER XYZ 

# Pre-requisite - Adding repositories 
RUN zypper ar private-registry-2 

RUN zypper --no-gpg-checks -n refresh 

#Install required packages and dependencies 
RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1 

#Downloading elasticsearch executable 
ENV ES_VERSION=2.4.0 
ENV ES_DIR="//opt//log-management//elasticsearch" 
ENV ES_CONFIG_PATH="${ES_DIR}//config" 
ENV ES_REST_PORT=9200 
ENV ES_INTERNAL_COM_PORT=9300 

WORKDIR /opt/log-management 
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate 
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \ 
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \ 
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR} 

#Exposing elasticsearch server container port to the HOST 
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT} 

#Removing binary files which are not needed 
RUN zypper -n rm wget 

# Removing zypper repos 
RUN zypper rr caspiancs_common 

#Running elasticsearch executable 
WORKDIR ${ES_DIR} 
ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true 

docker build -t es-test . 

1) When run, as a comment suggested, with `docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test` and doing `curl localhost:9200` inside the container or on the node running the container, I get the correct response. I still cannot reach the other cluster nodes on port 9200.

2) When run with `docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test` and doing `curl localhost:9200` inside the container, it works fine, but on the node it gives me the error

curl: (56) Recv failure: Connection reset by peer 

I still cannot reach port 9200 on the other nodes of the cluster.

Edit 5: Using this answer on this question, I got all three containers running ES 2.4. But ES cannot form a cluster out of these three containers. The network configuration is `network.host: 0.0.0.0` and `http.port: 9200`, plus:

#configure elasticsearch.yml for clustering 
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml 
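Putting edit 5's settings together, each container's elasticsearch.yml conceptually ends up like this (a sketch; `ELASTICSEARCH_IPS` is substituted at deploy time, shown here with the node IPs that appear in the logs):

```yaml
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.240.118.68", "10.240.118.69", "10.240.118.70"]
```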

The logs obtained with `docker logs elasticsearch` show the following:

[2016-10-06 12:31:28,887][WARN ][bootstrap    ] running as ROOT user. this is a bad idea! 
[2016-10-06 12:31:29,080][INFO ][node      ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z] 
[2016-10-06 12:31:29,081][INFO ][node      ] [Screech] initializing ... 
[2016-10-06 12:31:29,652][INFO ][plugins     ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites [] 
[2016-10-06 12:31:29,684][INFO ][env      ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs] 
[2016-10-06 12:31:29,684][INFO ][env      ] [Screech] heap size [989.8mb], compressed ordinary object pointers [true] 
[2016-10-06 12:31:29,720][WARN ][threadpool    ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead 
[2016-10-06 12:31:31,387][INFO ][node      ] [Screech] initialized 
[2016-10-06 12:31:31,387][INFO ][node      ] [Screech] starting ... 
[2016-10-06 12:31:31,456][INFO ][transport    ] [Screech] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300} 
[2016-10-06 12:31:31,465][INFO ][discovery    ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw 
[2016-10-06 12:31:34,500][WARN ][discovery.zen   ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying... 
ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300]; 
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002) 
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937) 
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911) 
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260) 
    at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444) 
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396) 
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96) 
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 

Whenever I set `network.host` to the IP address of the host the container is running on, I end up back in my old situation, i.e. only one container running ES 2.4 and the other two running 1.7.

Just saw that docker-proxy is listening on 9300, or "I think" it is listening:

elasticsearch-server/src/main/docker # netstat -nlp | grep 9300 
tcp  0  0 :::9300     :::*     LISTEN  6656/docker-proxy 

Any clues on this?

Add the command you used to create the container, the Dockerfile, docker-compose. Also include 'docker info'. Otherwise there are too many variables. – Alkaline

You said the logs don't mention any problem, but you have: `//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found` –

@michael_bitard Yes, sorry. I completely forgot about that. I did not find any information about that error. – vvs14

Answers

2

I was able to form the cluster with the following settings:

network.publish_host=CONTAINER_HOST_ADDRESS, i.e. the address of the node the container is running on
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS

`transport.publish_port` is important when you are running ES behind a proxy/load balancer such as nginx or HAProxy.
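In elasticsearch.yml form, the working settings look roughly like this (a sketch; `CONTAINER_HOST_ADDRESS` is a placeholder for each node's own host IP):

```yaml
# Per-node elasticsearch.yml for ES 2.4 in Docker behind port mapping
network.bind_host: 0.0.0.0                      # listen on all container interfaces
network.publish_host: CONTAINER_HOST_ADDRESS    # address other nodes can reach
transport.publish_port: 9300                    # host port mapped to the transport port
transport.publish_host: CONTAINER_HOST_ADDRESS
```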

0

Try mapping the ports with the `-p` flag when starting the container.

Neither EXPOSE nor --expose depends on the host in any way; by default these rules do not make a port accessible from the host. Given the limitations of the EXPOSE instruction, as a Dockerfile author you should often include an EXPOSE rule only as a hint that a port will provide a service. It is up to the operator of the container to specify further networking rules.

Try mapping your ports when executing docker run, e.g. `docker run -p 9200:9200 -p 9300:9300 <image>:<tag>`

I have already done this. Not working. – vvs14

1

According to the documentation for elasticsearch 2.x, `network.host` binds to localhost by default.

You need to explicitly set `network.host: 0.0.0.0`.

For example:

ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true -Des.network.host=0.0.0.0 
as specified in this answer.
Why 0.0.0.0? Why not CONTAINER_PRIVATE_IP or CONTAINER_HOST_ADDRESS? – vvs14

You could use CONTAINER_PRIVATE_IP or just `network.host: _non_loopback_`, but that means you cannot use the loopback interface inside the container. 0.0.0.0 is more in line with elasticsearch 1.7's behaviour. – keety
