
Why does a Spark Streaming application stall when consuming from Kafka on YARN?

I am writing a Spark Streaming application in Scala. The goal of the application is to consume the latest records from Kafka and print them to standard output.
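
A minimal sketch of what such an application looks like, assuming the spark-streaming-kafka 0.8 direct API (which the KafkaRDD and SimpleConsumer frames in the stack traces below point to); the broker list, topic name, and batch interval are placeholders:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object Main {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaToStdout")
    val ssc = new StreamingContext(conf, Seconds(5)) // placeholder batch interval

    // 0.8 direct-stream settings; "largest" starts from the latest offsets
    val kafkaParams = Map(
      "metadata.broker.list" -> "broker1:9092,broker2:9092", // placeholder brokers
      "auto.offset.reset" -> "largest"
    )

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("some-topic")) // placeholder topic

    // Print the first records of every batch (the DStream.print seen in the traces)
    stream.map { case (_, value) => value }.print()

    ssc.start()
    ssc.awaitTermination()
  }
}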

When I run it locally with --master local[n], the application works perfectly. However, when I run it on YARN (and produce to the topic it consumes from), the application gets stuck at:

16/11/18 20:53:05 INFO JobScheduler: Added jobs for time 1479502385000 ms 

After the above line has repeated several times, Spark gives the following error:

16/11/18 20:54:47 WARN TaskSetManager: Lost task 0.0 in stage 9.0 (TID 9, r3d3.hadoop.REDACTED.REDACTED): java.net.ConnectException: Connection timed out 
at sun.nio.ch.Net.connect0(Native Method) 
at sun.nio.ch.Net.connect(Net.java:454) 
at sun.nio.ch.Net.connect(Net.java:446) 
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) 
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57) 
at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44) 
at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:142) 
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109) 
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108) 
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107) 
at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.fetchBatch(KafkaRDD.scala:150) 
at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.getNext(KafkaRDD.scala:162) 
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) 
at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
at org.apache.spark.util.NextIterator.foreach(NextIterator.scala:21) 
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) 
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) 
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) 
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) 
at org.apache.spark.util.NextIterator.to(NextIterator.scala:21) 
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) 
at org.apache.spark.util.NextIterator.toBuffer(NextIterator.scala:21) 
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) 
at org.apache.spark.util.NextIterator.toArray(NextIterator.scala:21) 
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927) 
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927) 
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858) 
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858) 
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
at org.apache.spark.scheduler.Task.run(Task.scala:89) 
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 

Error from the Streaming UI:

org.apache.spark.streaming.dstream.DStream.print(DStream.scala:757) 
com.REDACTED.bdp.Main$.main(Main.scala:88) 
com.REDACTED.bdp.Main.main(Main.scala) 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
java.lang.reflect.Method.invoke(Method.java:498) 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

Error from the YARN application logs (stdout):

java.lang.NullPointerException 
     at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.close(KafkaRDD.scala:158) 
     at org.apache.spark.util.NextIterator.closeIfNeeded(NextIterator.scala:66) 
     at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator$$anonfun$1.apply(KafkaRDD.scala:101) 
     at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator$$anonfun$1.apply(KafkaRDD.scala:101) 
     at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60) 
     at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79) 
     at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77) 
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
     at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77) 
     at org.apache.spark.scheduler.Task.run(Task.scala:91) 
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 
[2016-11-21 15:57:49,925] ERROR Exception in task 0.1 in stage 33.0 (TID 34) (org.apache.spark.executor.Executor) 
org.apache.spark.util.TaskCompletionListenerException 
     at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87) 
     at org.apache.spark.scheduler.Task.run(Task.scala:91) 
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 

Another error from the YARN application logs:

[2016-11-21 15:52:32,264] WARN Exception encountered while connecting to the server : (org.apache.hadoop.ipc.Client) 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby 
     at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375) 
     at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:558) 
     at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:373) 
     at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:727) 
     at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:723) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:422) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:722) 
     at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373) 
     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1493) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1397) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1358) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) 
     at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:498) 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252) 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) 
     at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) 
     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116) 
     at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315) 
     at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311) 
     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311) 
     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424) 
     at org.apache.spark.deploy.yarn.Client$.org$apache$spark$deploy$yarn$Client$$sparkJar(Client.scala:1195) 
     at org.apache.spark.deploy.yarn.Client$.populateClasspath(Client.scala:1333) 
     at org.apache.spark.deploy.yarn.ExecutorRunnable.prepareEnvironment(ExecutorRunnable.scala:290) 
     at org.apache.spark.deploy.yarn.ExecutorRunnable.env$lzycompute(ExecutorRunnable.scala:61) 
     at org.apache.spark.deploy.yarn.ExecutorRunnable.env(ExecutorRunnable.scala:61) 
     at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:80) 
     at org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:68) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 

What is weird is that about 5% of the time the application reads from Kafka successfully, for whatever reason.

The cluster and YARN seem to be working properly. The cluster is secured with Kerberos.

What could be the source of this error?


'5% of the time, the application successfully reads from Kafka' together with 'The cluster is secured with Kerberos' makes me think about the time between these events. Could it be that your authentication token expires and, 5 minutes later, your streaming job starts failing? (I have never worked with a kerberized/secured Spark cluster.) 'r3d3.hadoop.REDACTED.REDACTED' is a host with a Spark executor, isn't it? Could you paste the Streaming tab from the web UI, from the start until the first failure? –


Check the YARN logs to see what exactly happened to the executors >> find the YARN application ID in the Spark driver logs (something like 'application_xxxx_xxxxxxxx') and use it to search the YARN UI -- or use the command line: 'yarn application -status <applicationId>; yarn logs -applicationId <applicationId>' –


My job fails immediately after it starts, if I produce to the Kafka topic it tries to read from. – dqian96

Answer


tl; dr答案不提供答案,僅僅表明可能的下一步。

To my understanding, a Lost task event can be reported for a streaming job when a task could not finish, and in your case it points at a connection problem between a Spark executor and a Kafka broker.

16/11/18 20:54:47 WARN TaskSetManager: Lost task 0.0 in stage 9.0 (TID 9, r3d3.hadoop.REDACTED.REDACTED): java.net.ConnectException: Connection timed out 
at sun.nio.ch.Net.connect0(Native Method) 
at sun.nio.ch.Net.connect(Net.java:454) 
at sun.nio.ch.Net.connect(Net.java:446) 
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) 
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57) 
at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44) 
at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:142) 
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109) 
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108) 
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107) 
at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.fetchBatch(KafkaRDD.scala:150) 

The pattern of the error message is as follows:

Lost task [id] in stage [taskSetId] (TID [tid], [host], executor [executorId]): [reason] 

which in your case translates to a Spark executor running on host r3d3.hadoop.REDACTED.REDACTED.

The reason for the failure is what follows it, which says:

java.net.ConnectException: Connection timed out 
at sun.nio.ch.Net.connect0(Native Method) 
at sun.nio.ch.Net.connect(Net.java:454) 
at sun.nio.ch.Net.connect(Net.java:446) 
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) 
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57) 
at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44) 
at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:142) 
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109) 
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108) 
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108) 
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107) 

And I would ask myself when a Kafka broker could be unavailable to a client (which in your case is a Spark Streaming application, and which may or may not help you find the root cause of the issue).
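
As a first sanity check, here is a minimal connectivity probe, a sketch only, to run on the executor host (e.g. r3d3.hadoop.REDACTED.REDACTED) against each Kafka broker; it mirrors the TCP connect that times out in the stack trace. The broker host and port below are placeholders, not values from the question:

import java.net.{InetSocketAddress, Socket}

object BrokerProbe {
  def main(args: Array[String]): Unit = {
    val host = "broker1.example.com" // placeholder broker host
    val port = 9092                  // default Kafka port; adjust to your setup
    val socket = new Socket()
    try {
      // Same kind of TCP connect the Kafka SimpleConsumer performs, with a 5s timeout
      socket.connect(new InetSocketAddress(host, port), 5000)
      println(s"Connected to $host:$port")
    } catch {
      case e: java.io.IOException =>
        println(s"Cannot reach $host:$port: ${e.getMessage}")
    } finally {
      socket.close()
    }
  }
}

If the probe fails from some executor hosts but succeeds from others (or from the driver host), that could also explain why the application succeeds about 5% of the time, e.g. firewall rules that only cover part of the cluster.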

I think it may have nothing to do with Apache Spark, and I would look for more answers in Kafka circles.
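
One classic cause worth checking on the Kafka side (an assumption on my part, not something confirmed by the question): the brokers may advertise an address that the executor hosts cannot resolve or reach. In 0.8-era server.properties this is controlled by:

# server.properties on each broker; the values are placeholders
advertised.host.name=broker1.example.com
advertised.port=9092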
